Jul 9 10:11:43.879042 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 9 10:11:43.879063 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 08:35:24 -00 2025 Jul 9 10:11:43.879072 kernel: KASLR enabled Jul 9 10:11:43.879078 kernel: efi: EFI v2.7 by EDK II Jul 9 10:11:43.879084 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Jul 9 10:11:43.879089 kernel: random: crng init done Jul 9 10:11:43.879096 kernel: secureboot: Secure boot disabled Jul 9 10:11:43.879102 kernel: ACPI: Early table checksum verification disabled Jul 9 10:11:43.879108 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Jul 9 10:11:43.879116 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 9 10:11:43.879122 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879128 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879133 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879139 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879146 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879154 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879160 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879166 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879172 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 10:11:43.879178 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 9 10:11:43.879184 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 9 10:11:43.879190 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 10:11:43.879196 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Jul 9 10:11:43.879202 kernel: Zone ranges: Jul 9 10:11:43.879208 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 10:11:43.879215 kernel: DMA32 empty Jul 9 10:11:43.879221 kernel: Normal empty Jul 9 10:11:43.879227 kernel: Device empty Jul 9 10:11:43.879232 kernel: Movable zone start for each node Jul 9 10:11:43.879238 kernel: Early memory node ranges Jul 9 10:11:43.879244 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Jul 9 10:11:43.879250 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Jul 9 10:11:43.879256 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Jul 9 10:11:43.879262 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Jul 9 10:11:43.879268 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Jul 9 10:11:43.879274 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Jul 9 10:11:43.879280 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Jul 9 10:11:43.879287 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Jul 9 10:11:43.879293 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Jul 9 10:11:43.879299 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 9 10:11:43.879307 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 9 10:11:43.879314 kernel: node 0: [mem 
0x00000000dcec0000-0x00000000dcfdffff] Jul 9 10:11:43.879320 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 9 10:11:43.879328 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 9 10:11:43.879334 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 9 10:11:43.879341 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Jul 9 10:11:43.879347 kernel: psci: probing for conduit method from ACPI. Jul 9 10:11:43.879353 kernel: psci: PSCIv1.1 detected in firmware. Jul 9 10:11:43.879360 kernel: psci: Using standard PSCI v0.2 function IDs Jul 9 10:11:43.879366 kernel: psci: Trusted OS migration not required Jul 9 10:11:43.879372 kernel: psci: SMC Calling Convention v1.1 Jul 9 10:11:43.879379 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 9 10:11:43.879385 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 9 10:11:43.879393 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 9 10:11:43.879399 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 9 10:11:43.879406 kernel: Detected PIPT I-cache on CPU0 Jul 9 10:11:43.879412 kernel: CPU features: detected: GIC system register CPU interface Jul 9 10:11:43.879418 kernel: CPU features: detected: Spectre-v4 Jul 9 10:11:43.879425 kernel: CPU features: detected: Spectre-BHB Jul 9 10:11:43.879431 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 9 10:11:43.879437 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 9 10:11:43.879444 kernel: CPU features: detected: ARM erratum 1418040 Jul 9 10:11:43.879450 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 9 10:11:43.879456 kernel: alternatives: applying boot alternatives Jul 9 10:11:43.879464 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1 Jul 9 10:11:43.879472 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 9 10:11:43.879478 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 10:11:43.879484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 10:11:43.879491 kernel: Fallback order for Node 0: 0 Jul 9 10:11:43.879497 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 9 10:11:43.879503 kernel: Policy zone: DMA Jul 9 10:11:43.879510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 10:11:43.879516 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 9 10:11:43.879523 kernel: software IO TLB: area num 4. Jul 9 10:11:43.879529 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 9 10:11:43.879535 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Jul 9 10:11:43.879543 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 9 10:11:43.879549 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 10:11:43.879556 kernel: rcu: RCU event tracing is enabled. Jul 9 10:11:43.879563 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 9 10:11:43.879569 kernel: Trampoline variant of Tasks RCU enabled. 
Jul 9 10:11:43.879576 kernel: Tracing variant of Tasks RCU enabled. Jul 9 10:11:43.879582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 9 10:11:43.879589 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 9 10:11:43.879595 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 10:11:43.879602 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 10:11:43.879608 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 9 10:11:43.879616 kernel: GICv3: 256 SPIs implemented Jul 9 10:11:43.879622 kernel: GICv3: 0 Extended SPIs implemented Jul 9 10:11:43.879628 kernel: Root IRQ handler: gic_handle_irq Jul 9 10:11:43.879634 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 9 10:11:43.879641 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 9 10:11:43.879647 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 9 10:11:43.879654 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 9 10:11:43.879667 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 9 10:11:43.879685 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 9 10:11:43.879692 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 9 10:11:43.879698 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 9 10:11:43.879705 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 9 10:11:43.879714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 10:11:43.879720 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 9 10:11:43.879727 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 9 10:11:43.879734 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 9 10:11:43.879740 kernel: arm-pv: using stolen time PV Jul 9 10:11:43.879747 kernel: Console: colour dummy device 80x25 Jul 9 10:11:43.879754 kernel: ACPI: Core revision 20240827 Jul 9 10:11:43.879761 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 9 10:11:43.879767 kernel: pid_max: default: 32768 minimum: 301 Jul 9 10:11:43.879774 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 9 10:11:43.879782 kernel: landlock: Up and running. Jul 9 10:11:43.879789 kernel: SELinux: Initializing. Jul 9 10:11:43.879795 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 10:11:43.879802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 10:11:43.879809 kernel: rcu: Hierarchical SRCU implementation. Jul 9 10:11:43.879816 kernel: rcu: Max phase no-delay instances is 400. Jul 9 10:11:43.879823 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 9 10:11:43.879829 kernel: Remapping and enabling EFI services. Jul 9 10:11:43.879836 kernel: smp: Bringing up secondary CPUs ... 
Jul 9 10:11:43.879849 kernel: Detected PIPT I-cache on CPU1 Jul 9 10:11:43.879856 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 9 10:11:43.879864 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 9 10:11:43.879872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 10:11:43.879879 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 9 10:11:43.879886 kernel: Detected PIPT I-cache on CPU2 Jul 9 10:11:43.879894 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 9 10:11:43.879901 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 9 10:11:43.879910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 10:11:43.879916 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 9 10:11:43.879923 kernel: Detected PIPT I-cache on CPU3 Jul 9 10:11:43.879931 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 9 10:11:43.879938 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 9 10:11:43.879945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 9 10:11:43.879952 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 9 10:11:43.879959 kernel: smp: Brought up 1 node, 4 CPUs Jul 9 10:11:43.879965 kernel: SMP: Total of 4 processors activated. Jul 9 10:11:43.879974 kernel: CPU: All CPU(s) started at EL1 Jul 9 10:11:43.879981 kernel: CPU features: detected: 32-bit EL0 Support Jul 9 10:11:43.879988 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 9 10:11:43.879995 kernel: CPU features: detected: Common not Private translations Jul 9 10:11:43.880001 kernel: CPU features: detected: CRC32 instructions Jul 9 10:11:43.880009 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 9 10:11:43.880016 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 9 10:11:43.880023 kernel: CPU features: detected: LSE atomic instructions Jul 9 10:11:43.880030 kernel: CPU features: detected: Privileged Access Never Jul 9 10:11:43.880038 kernel: CPU features: detected: RAS Extension Support Jul 9 10:11:43.880045 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 9 10:11:43.880052 kernel: alternatives: applying system-wide alternatives Jul 9 10:11:43.880059 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 9 10:11:43.880067 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved) Jul 9 10:11:43.880074 kernel: devtmpfs: initialized Jul 9 10:11:43.880081 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 10:11:43.880088 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 9 10:11:43.880096 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 9 10:11:43.880104 kernel: 0 pages in range for non-PLT usage Jul 9 10:11:43.880111 kernel: 508448 pages in range for PLT usage Jul 9 10:11:43.880118 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 10:11:43.880125 kernel: SMBIOS 3.0.0 present. 
Jul 9 10:11:43.880132 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 9 10:11:43.880139 kernel: DMI: Memory slots populated: 1/1 Jul 9 10:11:43.880146 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 10:11:43.880153 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 9 10:11:43.880160 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 9 10:11:43.880169 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 9 10:11:43.880176 kernel: audit: initializing netlink subsys (disabled) Jul 9 10:11:43.880183 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jul 9 10:11:43.880190 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 10:11:43.880197 kernel: cpuidle: using governor menu Jul 9 10:11:43.880204 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 9 10:11:43.880211 kernel: ASID allocator initialised with 32768 entries Jul 9 10:11:43.880218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 10:11:43.880225 kernel: Serial: AMBA PL011 UART driver Jul 9 10:11:43.880234 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 10:11:43.880241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 10:11:43.880248 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 9 10:11:43.880255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 9 10:11:43.880262 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 10:11:43.880269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 10:11:43.880276 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 9 10:11:43.880283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 9 10:11:43.880290 kernel: ACPI: Added _OSI(Module Device) Jul 9 10:11:43.880297 kernel: ACPI: Added _OSI(Processor Device) Jul 9 10:11:43.880305 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 10:11:43.880312 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 10:11:43.880320 kernel: ACPI: Interpreter enabled Jul 9 10:11:43.880327 kernel: ACPI: Using GIC for interrupt routing Jul 9 10:11:43.880334 kernel: ACPI: MCFG table detected, 1 entries Jul 9 10:11:43.880341 kernel: ACPI: CPU0 has been hot-added Jul 9 10:11:43.880347 kernel: ACPI: CPU1 has been hot-added Jul 9 10:11:43.880354 kernel: ACPI: CPU2 has been hot-added Jul 9 10:11:43.880362 kernel: ACPI: CPU3 has been hot-added Jul 9 10:11:43.880370 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 9 10:11:43.880377 kernel: printk: legacy console [ttyAMA0] enabled Jul 9 10:11:43.880384 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 9 10:11:43.880527 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 9 10:11:43.880601 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 9 10:11:43.880683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 9 10:11:43.880752 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 9 10:11:43.880816 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 9 10:11:43.880825 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 9 10:11:43.880833 kernel: PCI host bridge to bus 0000:00 Jul 9 
10:11:43.880901 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 9 10:11:43.880958 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 9 10:11:43.881014 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 9 10:11:43.881069 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 9 10:11:43.881152 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 9 10:11:43.881225 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 9 10:11:43.881289 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 9 10:11:43.881353 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 9 10:11:43.881415 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 9 10:11:43.881478 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 9 10:11:43.881540 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 9 10:11:43.881605 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 9 10:11:43.881667 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 9 10:11:43.881799 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 9 10:11:43.881856 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 9 10:11:43.881865 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 9 10:11:43.881872 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 9 10:11:43.881879 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 9 10:11:43.881890 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 9 10:11:43.881897 kernel: iommu: Default domain type: Translated Jul 9 10:11:43.881904 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 9 10:11:43.881911 kernel: efivars: Registered efivars operations Jul 9 10:11:43.881918 kernel: vgaarb: loaded Jul 9 10:11:43.881925 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 9 10:11:43.881932 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 10:11:43.881938 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 10:11:43.881945 kernel: pnp: PnP ACPI init Jul 9 10:11:43.882024 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 9 10:11:43.882034 kernel: pnp: PnP ACPI: found 1 devices Jul 9 10:11:43.882041 kernel: NET: Registered PF_INET protocol family Jul 9 10:11:43.882049 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 10:11:43.882056 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 10:11:43.882063 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 10:11:43.882070 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 10:11:43.882077 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 9 10:11:43.882087 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 10:11:43.882094 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 10:11:43.882101 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 10:11:43.882108 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 10:11:43.882115 kernel: PCI: CLS 0 bytes, default 64 Jul 9 10:11:43.882122 kernel: kvm [1]: HYP mode not available Jul 9 10:11:43.882129 kernel: Initialise system 
trusted keyrings Jul 9 10:11:43.882136 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 9 10:11:43.882143 kernel: Key type asymmetric registered Jul 9 10:11:43.882149 kernel: Asymmetric key parser 'x509' registered Jul 9 10:11:43.882158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 9 10:11:43.882165 kernel: io scheduler mq-deadline registered Jul 9 10:11:43.882172 kernel: io scheduler kyber registered Jul 9 10:11:43.882179 kernel: io scheduler bfq registered Jul 9 10:11:43.882186 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 9 10:11:43.882193 kernel: ACPI: button: Power Button [PWRB] Jul 9 10:11:43.882201 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 9 10:11:43.882264 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 9 10:11:43.882274 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 10:11:43.882283 kernel: thunder_xcv, ver 1.0 Jul 9 10:11:43.882289 kernel: thunder_bgx, ver 1.0 Jul 9 10:11:43.882296 kernel: nicpf, ver 1.0 Jul 9 10:11:43.882303 kernel: nicvf, ver 1.0 Jul 9 10:11:43.882371 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 9 10:11:43.882429 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T10:11:43 UTC (1752055903) Jul 9 10:11:43.882438 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 9 10:11:43.882445 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 9 10:11:43.882454 kernel: watchdog: NMI not fully supported Jul 9 10:11:43.882461 kernel: watchdog: Hard watchdog permanently disabled Jul 9 10:11:43.882468 kernel: NET: Registered PF_INET6 protocol family Jul 9 10:11:43.882475 kernel: Segment Routing with IPv6 Jul 9 10:11:43.882482 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 10:11:43.882489 kernel: NET: Registered PF_PACKET protocol family Jul 9 10:11:43.882496 kernel: Key type dns_resolver registered Jul 9 10:11:43.882502 kernel: registered taskstats version 1 Jul 9 10:11:43.882509 kernel: Loading compiled-in X.509 certificates Jul 9 10:11:43.882518 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 3af455426f266805bd3cf61871c72c3a0bf9894a' Jul 9 10:11:43.882525 kernel: Demotion targets for Node 0: null Jul 9 10:11:43.882532 kernel: Key type .fscrypt registered Jul 9 10:11:43.882539 kernel: Key type fscrypt-provisioning registered Jul 9 10:11:43.882546 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 9 10:11:43.882552 kernel: ima: Allocated hash algorithm: sha1 Jul 9 10:11:43.882559 kernel: ima: No architecture policies found Jul 9 10:11:43.882566 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 9 10:11:43.882575 kernel: clk: Disabling unused clocks Jul 9 10:11:43.882582 kernel: PM: genpd: Disabling unused power domains Jul 9 10:11:43.882589 kernel: Warning: unable to open an initial console. Jul 9 10:11:43.882596 kernel: Freeing unused kernel memory: 39424K Jul 9 10:11:43.882603 kernel: Run /init as init process Jul 9 10:11:43.882610 kernel: with arguments: Jul 9 10:11:43.882616 kernel: /init Jul 9 10:11:43.882623 kernel: with environment: Jul 9 10:11:43.882630 kernel: HOME=/ Jul 9 10:11:43.882636 kernel: TERM=linux Jul 9 10:11:43.882645 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 10:11:43.882652 systemd[1]: Successfully made /usr/ read-only. 
Jul 9 10:11:43.882670 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 10:11:43.882687 systemd[1]: Detected virtualization kvm. Jul 9 10:11:43.882695 systemd[1]: Detected architecture arm64. Jul 9 10:11:43.882702 systemd[1]: Running in initrd. Jul 9 10:11:43.882709 systemd[1]: No hostname configured, using default hostname. Jul 9 10:11:43.882719 systemd[1]: Hostname set to . Jul 9 10:11:43.882726 systemd[1]: Initializing machine ID from VM UUID. Jul 9 10:11:43.882733 systemd[1]: Queued start job for default target initrd.target. Jul 9 10:11:43.882741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 10:11:43.882748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 10:11:43.882756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 10:11:43.882764 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 10:11:43.882771 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 10:11:43.882781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 10:11:43.882789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 10:11:43.882797 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 10:11:43.882804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 10:11:43.882812 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 10:11:43.882820 systemd[1]: Reached target paths.target - Path Units. Jul 9 10:11:43.882827 systemd[1]: Reached target slices.target - Slice Units. Jul 9 10:11:43.882836 systemd[1]: Reached target swap.target - Swaps. Jul 9 10:11:43.882843 systemd[1]: Reached target timers.target - Timer Units. Jul 9 10:11:43.882851 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 10:11:43.882858 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 10:11:43.882866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 10:11:43.882874 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 10:11:43.882881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 10:11:43.882888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 10:11:43.882897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 10:11:43.882904 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 10:11:43.882912 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 10:11:43.882919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 10:11:43.882927 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 9 10:11:43.882934 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 9 10:11:43.882942 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 10:11:43.882949 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 10:11:43.882957 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 10:11:43.882966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 10:11:43.882973 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 10:11:43.882981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 10:11:43.882988 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 10:11:43.882997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 10:11:43.883023 systemd-journald[244]: Collecting audit messages is disabled. Jul 9 10:11:43.883042 systemd-journald[244]: Journal started Jul 9 10:11:43.883062 systemd-journald[244]: Runtime Journal (/run/log/journal/a9644dfd74544bd1b037dd98e3893b05) is 6M, max 48.5M, 42.4M free. Jul 9 10:11:43.892832 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 10:11:43.873298 systemd-modules-load[245]: Inserted module 'overlay' Jul 9 10:11:43.895273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:43.899183 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 10:11:43.899210 kernel: Bridge firewalling registered Jul 9 10:11:43.899745 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 9 10:11:43.900967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 10:11:43.902710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 10:11:43.915878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 10:11:43.917199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 10:11:43.921036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:11:43.922595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 10:11:43.924108 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 9 10:11:43.929559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 10:11:43.933327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 10:11:43.934762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:11:43.936926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 10:11:43.940138 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 10:11:43.942480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 9 10:11:43.966113 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74a33b1d464884e3b2573e51f747b6939e1912812116b4748b2b08804b5b74c1 Jul 9 10:11:43.979929 systemd-resolved[289]: Positive Trust Anchors: Jul 9 10:11:43.979948 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 10:11:43.979979 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 10:11:43.984785 systemd-resolved[289]: Defaulting to hostname 'linux'. Jul 9 10:11:43.985721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 10:11:43.989167 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 10:11:44.039689 kernel: SCSI subsystem initialized Jul 9 10:11:44.043691 kernel: Loading iSCSI transport class v2.0-870. Jul 9 10:11:44.052701 kernel: iscsi: registered transport (tcp) Jul 9 10:11:44.067701 kernel: iscsi: registered transport (qla4xxx) Jul 9 10:11:44.067730 kernel: QLogic iSCSI HBA Driver Jul 9 10:11:44.088783 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 10:11:44.108053 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 10:11:44.110826 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 10:11:44.156734 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 10:11:44.159069 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 10:11:44.215704 kernel: raid6: neonx8 gen() 15735 MB/s Jul 9 10:11:44.232697 kernel: raid6: neonx4 gen() 12033 MB/s Jul 9 10:11:44.249703 kernel: raid6: neonx2 gen() 10798 MB/s Jul 9 10:11:44.266703 kernel: raid6: neonx1 gen() 10149 MB/s Jul 9 10:11:44.283700 kernel: raid6: int64x8 gen() 6852 MB/s Jul 9 10:11:44.300697 kernel: raid6: int64x4 gen() 7353 MB/s Jul 9 10:11:44.317698 kernel: raid6: int64x2 gen() 6104 MB/s Jul 9 10:11:44.334822 kernel: raid6: int64x1 gen() 5052 MB/s Jul 9 10:11:44.334836 kernel: raid6: using algorithm neonx8 gen() 15735 MB/s Jul 9 10:11:44.352819 kernel: raid6: .... xor() 12043 MB/s, rmw enabled Jul 9 10:11:44.352838 kernel: raid6: using neon recovery algorithm Jul 9 10:11:44.358735 kernel: xor: measuring software checksum speed Jul 9 10:11:44.358752 kernel: 8regs : 21636 MB/sec Jul 9 10:11:44.360078 kernel: 32regs : 21664 MB/sec Jul 9 10:11:44.360090 kernel: arm64_neon : 27974 MB/sec Jul 9 10:11:44.360100 kernel: xor: using function: arm64_neon (27974 MB/sec) Jul 9 10:11:44.427715 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 10:11:44.436014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 9 10:11:44.439171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 10:11:44.474756 systemd-udevd[500]: Using default interface naming scheme 'v255'. Jul 9 10:11:44.478908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 10:11:44.481051 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 10:11:44.512917 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jul 9 10:11:44.534919 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 10:11:44.537229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 10:11:44.588705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 10:11:44.591827 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 10:11:44.635705 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 9 10:11:44.638693 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 10:11:44.645235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 10:11:44.650579 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 10:11:44.650613 kernel: GPT:9289727 != 19775487 Jul 9 10:11:44.650628 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 10:11:44.650637 kernel: GPT:9289727 != 19775487 Jul 9 10:11:44.650662 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 10:11:44.650683 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 10:11:44.645376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:44.650495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 10:11:44.653084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 10:11:44.675190 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 10:11:44.676739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:44.686056 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 10:11:44.693201 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 10:11:44.694512 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 10:11:44.708441 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 10:11:44.716060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 10:11:44.717310 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 10:11:44.719499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 10:11:44.721717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 10:11:44.724444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 10:11:44.726903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 10:11:44.739379 disk-uuid[591]: Primary Header is updated. Jul 9 10:11:44.739379 disk-uuid[591]: Secondary Entries is updated. Jul 9 10:11:44.739379 disk-uuid[591]: Secondary Header is updated. 
Jul 9 10:11:44.742696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 10:11:44.743233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 10:11:45.754704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 10:11:45.756981 disk-uuid[597]: The operation has completed successfully. Jul 9 10:11:45.775550 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 10:11:45.775647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 10:11:45.803914 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 10:11:45.826517 sh[610]: Success Jul 9 10:11:45.842009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 10:11:45.842044 kernel: device-mapper: uevent: version 1.0.3 Jul 9 10:11:45.845691 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 10:11:45.852698 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 10:11:45.878135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 10:11:45.879920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 10:11:45.894932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 10:11:45.901543 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 10:11:45.901581 kernel: BTRFS: device fsid b890ad05-381e-41d5-a872-05bd1f9d6a23 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (622) Jul 9 10:11:45.903842 kernel: BTRFS info (device dm-0): first mount of filesystem b890ad05-381e-41d5-a872-05bd1f9d6a23 Jul 9 10:11:45.903868 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 10:11:45.903878 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 10:11:45.907803 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 10:11:45.909040 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 10:11:45.910448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 10:11:45.911203 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 10:11:45.912747 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 10:11:45.938700 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 9 10:11:45.938738 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 10:11:45.938749 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 10:11:45.939700 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 10:11:45.945755 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 10:11:45.947717 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 10:11:45.950490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 10:11:46.015018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 10:11:46.020210 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 9 10:11:46.066092 systemd-networkd[797]: lo: Link UP Jul 9 10:11:46.066106 systemd-networkd[797]: lo: Gained carrier Jul 9 10:11:46.066850 systemd-networkd[797]: Enumeration completed Jul 9 10:11:46.067282 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:46.067286 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 10:11:46.068113 systemd-networkd[797]: eth0: Link UP Jul 9 10:11:46.068116 systemd-networkd[797]: eth0: Gained carrier Jul 9 10:11:46.068125 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:46.068791 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 10:11:46.069928 systemd[1]: Reached target network.target - Network. Jul 9 10:11:46.090332 ignition[698]: Ignition 2.21.0 Jul 9 10:11:46.090346 ignition[698]: Stage: fetch-offline Jul 9 10:11:46.090379 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:46.090387 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:46.092721 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 10:11:46.090567 ignition[698]: parsed url from cmdline: "" Jul 9 10:11:46.090570 ignition[698]: no config URL provided Jul 9 10:11:46.090575 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 10:11:46.090582 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jul 9 10:11:46.090599 ignition[698]: op(1): [started] loading QEMU firmware config module Jul 9 10:11:46.090603 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 10:11:46.097646 ignition[698]: op(1): [finished] loading QEMU firmware config module Jul 9 10:11:46.134995 ignition[698]: parsing config with SHA512: 1256de9c6c0da0143ebf26c6d5a6f55bd515e223ece4baf01fe474064c85c22ceff3a882f4c7011cf219e171652f9ff3a7ec8a66debd8bd33ab2576599f3b3a6 Jul 9 10:11:46.140639 unknown[698]: fetched base config from "system" Jul 9 10:11:46.140662 unknown[698]: fetched user config from "qemu" Jul 9 10:11:46.141085 ignition[698]: fetch-offline: fetch-offline passed Jul 9 10:11:46.141139 ignition[698]: Ignition finished successfully Jul 9 10:11:46.144482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 10:11:46.147255 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 10:11:46.148116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 10:11:46.170259 ignition[813]: Ignition 2.21.0 Jul 9 10:11:46.170275 ignition[813]: Stage: kargs Jul 9 10:11:46.170403 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:46.170412 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:46.172504 ignition[813]: kargs: kargs passed Jul 9 10:11:46.172561 ignition[813]: Ignition finished successfully Jul 9 10:11:46.176921 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 10:11:46.180825 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 9 10:11:46.203287 ignition[821]: Ignition 2.21.0 Jul 9 10:11:46.203306 ignition[821]: Stage: disks Jul 9 10:11:46.203435 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:46.203444 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:46.205986 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 10:11:46.204178 ignition[821]: disks: disks passed Jul 9 10:11:46.207835 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 10:11:46.204222 ignition[821]: Ignition finished successfully Jul 9 10:11:46.210788 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 10:11:46.212518 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 10:11:46.214378 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 10:11:46.215959 systemd[1]: Reached target basic.target - Basic System. Jul 9 10:11:46.218592 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 10:11:46.258442 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 10:11:46.262727 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 10:11:46.265773 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 10:11:46.339699 kernel: EXT4-fs (vda9): mounted filesystem 83f4d40b-59ad-4dad-9ca3-9ab67909ff35 r/w with ordered data mode. Quota mode: none. Jul 9 10:11:46.340153 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 10:11:46.341460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 10:11:46.344626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 10:11:46.346988 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 10:11:46.348790 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 10:11:46.348839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 10:11:46.348866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 10:11:46.362253 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 10:11:46.364468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 10:11:46.368699 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (840) Jul 9 10:11:46.371322 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 10:11:46.371357 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 10:11:46.371374 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 10:11:46.374760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 10:11:46.408053 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 10:11:46.412581 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Jul 9 10:11:46.416664 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 10:11:46.419560 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 10:11:46.493739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 10:11:46.495810 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 9 10:11:46.497482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 10:11:46.519698 kernel: BTRFS info (device vda6): last unmount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 10:11:46.538304 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 10:11:46.540258 ignition[954]: INFO : Ignition 2.21.0 Jul 9 10:11:46.540258 ignition[954]: INFO : Stage: mount Jul 9 10:11:46.540258 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:46.540258 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:46.544286 ignition[954]: INFO : mount: mount passed Jul 9 10:11:46.544286 ignition[954]: INFO : Ignition finished successfully Jul 9 10:11:46.543126 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 10:11:46.546255 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 10:11:46.900339 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 10:11:46.901944 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 10:11:46.925691 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966) Jul 9 10:11:46.925723 kernel: BTRFS info (device vda6): first mount of filesystem ca4c1680-5eeb-49d9-a6a7-27565f55e2d5 Jul 9 10:11:46.927712 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 10:11:46.927738 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 10:11:46.930993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 10:11:46.956158 ignition[983]: INFO : Ignition 2.21.0 Jul 9 10:11:46.956158 ignition[983]: INFO : Stage: files Jul 9 10:11:46.958365 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:46.958365 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:46.958365 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Jul 9 10:11:46.961876 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 10:11:46.961876 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 10:11:46.961876 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 10:11:46.961876 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 10:11:46.961876 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 10:11:46.961244 unknown[983]: wrote ssh authorized keys file for user: core Jul 9 10:11:46.969624 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 9 10:11:46.969624 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 9 10:11:47.062177 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 10:11:47.588092 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 9 10:11:47.588092 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 10:11:47.591853 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 9 10:11:47.803838 systemd-networkd[797]: eth0: Gained IPv6LL Jul 9 10:11:47.954146 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 10:11:48.023584 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 10:11:48.025604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 10:11:48.038878 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 9 10:11:48.416920 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 10:11:48.603054 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 9 10:11:48.603054 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 9 10:11:48.606717 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 10:11:48.616789 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 10:11:48.616789 ignition[983]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" Jul 9 10:11:48.616789 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 9 10:11:48.616789 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 10:11:48.623839 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 10:11:48.623839 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 9 10:11:48.623839 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 10:11:48.640746 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 10:11:48.644562 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 10:11:48.646832 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 10:11:48.646832 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 9 10:11:48.646832 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 10:11:48.646832 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 10:11:48.646832 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 10:11:48.646832 ignition[983]: INFO : files: files passed Jul 9 10:11:48.646832 ignition[983]: INFO : Ignition finished successfully Jul 9 10:11:48.647512 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 10:11:48.650993 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 10:11:48.653829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 10:11:48.668761 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 10:11:48.669899 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 10:11:48.670199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 10:11:48.674224 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 10:11:48.674224 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 10:11:48.677453 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 10:11:48.677919 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 10:11:48.680364 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 10:11:48.684196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 10:11:48.729495 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 10:11:48.729609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 10:11:48.731999 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 10:11:48.733992 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jul 9 10:11:48.735902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 10:11:48.736709 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 10:11:48.771465 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 10:11:48.774237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 10:11:48.797110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 10:11:48.798404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 10:11:48.800555 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 10:11:48.802460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 10:11:48.802588 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 10:11:48.805170 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 10:11:48.807353 systemd[1]: Stopped target basic.target - Basic System. Jul 9 10:11:48.809115 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 10:11:48.811154 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 10:11:48.813162 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 10:11:48.815395 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 10:11:48.817438 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 10:11:48.819382 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 10:11:48.821476 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 10:11:48.826796 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 10:11:48.828819 systemd[1]: Stopped target swap.target - Swaps. Jul 9 10:11:48.830402 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 10:11:48.830535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 10:11:48.833014 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 10:11:48.834978 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 10:11:48.837006 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 10:11:48.837147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 10:11:48.839213 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 10:11:48.839334 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 10:11:48.842361 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 10:11:48.842478 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 10:11:48.844498 systemd[1]: Stopped target paths.target - Path Units. Jul 9 10:11:48.846208 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 10:11:48.849732 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 10:11:48.851416 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 10:11:48.853667 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 10:11:48.855390 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 10:11:48.855476 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jul 9 10:11:48.857243 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 10:11:48.857310 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 10:11:48.858984 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 10:11:48.859111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 10:11:48.860968 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 10:11:48.861069 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 10:11:48.863576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 10:11:48.866322 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 10:11:48.867545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 10:11:48.867667 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 10:11:48.869638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 10:11:48.869761 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 10:11:48.876500 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 10:11:48.876580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 10:11:48.884189 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 10:11:48.892688 ignition[1038]: INFO : Ignition 2.21.0 Jul 9 10:11:48.892688 ignition[1038]: INFO : Stage: umount Jul 9 10:11:48.894503 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 10:11:48.894503 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 10:11:48.894503 ignition[1038]: INFO : umount: umount passed Jul 9 10:11:48.894503 ignition[1038]: INFO : Ignition finished successfully Jul 9 10:11:48.896510 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 10:11:48.896799 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 10:11:48.898082 systemd[1]: Stopped target network.target - Network. Jul 9 10:11:48.899806 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 10:11:48.899876 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 10:11:48.901846 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 10:11:48.901896 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 10:11:48.903852 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 10:11:48.903901 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 10:11:48.905772 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 10:11:48.905814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 10:11:48.907769 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 10:11:48.909621 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 10:11:48.920557 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 10:11:48.920732 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 10:11:48.924859 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 10:11:48.925190 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 10:11:48.925228 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 10:11:48.928835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jul 9 10:11:48.929049 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 10:11:48.929760 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 10:11:48.933354 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 10:11:48.933751 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 10:11:48.936149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 10:11:48.936197 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 10:11:48.939286 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 10:11:48.941649 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 10:11:48.941742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 10:11:48.945830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 10:11:48.945886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:11:48.948814 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 10:11:48.948862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 10:11:48.950792 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 10:11:48.956534 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 10:11:48.961771 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 10:11:48.961897 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 10:11:48.964009 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 10:11:48.964051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 10:11:48.967929 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 10:11:48.974857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 10:11:48.976651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 10:11:48.976732 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 10:11:48.978332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 10:11:48.978365 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 10:11:48.980182 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 10:11:48.980238 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 10:11:48.982967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 10:11:48.983019 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 10:11:48.985691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 10:11:48.985748 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 10:11:48.989398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 10:11:48.990459 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 10:11:48.990521 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 10:11:48.993702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 10:11:48.993749 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 10:11:48.996873 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jul 9 10:11:48.996915 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 10:11:49.000285 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 10:11:49.000332 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 10:11:49.002626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 10:11:49.002691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:49.006594 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 10:11:49.008701 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 10:11:49.010391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 10:11:49.010478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 10:11:49.012906 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 10:11:49.014662 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 10:11:49.039414 systemd[1]: Switching root. Jul 9 10:11:49.073857 systemd-journald[244]: Journal stopped Jul 9 10:11:49.917863 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jul 9 10:11:49.917921 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 10:11:49.917936 kernel: SELinux: policy capability open_perms=1 Jul 9 10:11:49.917951 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 10:11:49.917963 kernel: SELinux: policy capability always_check_network=0 Jul 9 10:11:49.917976 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 10:11:49.917990 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 10:11:49.918000 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 10:11:49.918011 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 10:11:49.918021 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 10:11:49.918040 kernel: audit: type=1403 audit(1752055909.276:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 10:11:49.918051 systemd[1]: Successfully loaded SELinux policy in 62.291ms. Jul 9 10:11:49.918065 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.270ms. Jul 9 10:11:49.918077 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 10:11:49.918089 systemd[1]: Detected virtualization kvm. Jul 9 10:11:49.918102 systemd[1]: Detected architecture arm64. Jul 9 10:11:49.918113 systemd[1]: Detected first boot. Jul 9 10:11:49.918125 systemd[1]: Initializing machine ID from VM UUID. Jul 9 10:11:49.918135 zram_generator::config[1084]: No configuration found. Jul 9 10:11:49.918147 kernel: NET: Registered PF_VSOCK protocol family Jul 9 10:11:49.918158 systemd[1]: Populated /etc with preset unit settings. Jul 9 10:11:49.918170 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 10:11:49.918182 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 10:11:49.918195 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 10:11:49.918207 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jul 9 10:11:49.918223 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 10:11:49.918235 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 10:11:49.918246 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 10:11:49.918262 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 10:11:49.918274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 10:11:49.918286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 10:11:49.918297 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 10:11:49.918310 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 10:11:49.918322 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 10:11:49.918334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 10:11:49.918346 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 10:11:49.918357 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 10:11:49.918370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 10:11:49.918381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 10:11:49.918394 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 9 10:11:49.918407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 10:11:49.918419 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 10:11:49.918435 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 10:11:49.918448 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 10:11:49.918462 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 10:11:49.918495 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 10:11:49.918507 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 10:11:49.918519 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 10:11:49.918532 systemd[1]: Reached target slices.target - Slice Units. Jul 9 10:11:49.918544 systemd[1]: Reached target swap.target - Swaps. Jul 9 10:11:49.918556 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 10:11:49.918568 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 10:11:49.918579 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 10:11:49.918593 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 10:11:49.918606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 10:11:49.918617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 10:11:49.918628 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 10:11:49.918647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 10:11:49.918661 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 10:11:49.918687 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 9 10:11:49.918700 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 10:11:49.918712 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 10:11:49.918724 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 10:11:49.918736 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 10:11:49.918748 systemd[1]: Reached target machines.target - Containers. Jul 9 10:11:49.918759 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 10:11:49.918772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:49.918783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 10:11:49.918795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 10:11:49.918805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:49.918816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 10:11:49.918828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:49.918840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 10:11:49.918856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:49.918867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 10:11:49.918881 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 10:11:49.918892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 10:11:49.918903 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 10:11:49.918915 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 10:11:49.918926 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:49.918937 kernel: fuse: init (API version 7.41) Jul 9 10:11:49.918949 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 10:11:49.918960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 10:11:49.918973 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 10:11:49.918983 kernel: loop: module loaded Jul 9 10:11:49.918994 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 10:11:49.919005 kernel: ACPI: bus type drm_connector registered Jul 9 10:11:49.919016 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 10:11:49.919027 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 10:11:49.919041 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 10:11:49.919053 systemd[1]: Stopped verity-setup.service. Jul 9 10:11:49.919064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 10:11:49.919076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 10:11:49.919087 systemd[1]: Mounted media.mount - External Media Directory. 
Jul 9 10:11:49.919098 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 10:11:49.919110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 10:11:49.919121 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 10:11:49.919158 systemd-journald[1152]: Collecting audit messages is disabled. Jul 9 10:11:49.919184 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 10:11:49.919195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 10:11:49.919207 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 10:11:49.919220 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 10:11:49.919232 systemd-journald[1152]: Journal started Jul 9 10:11:49.919255 systemd-journald[1152]: Runtime Journal (/run/log/journal/a9644dfd74544bd1b037dd98e3893b05) is 6M, max 48.5M, 42.4M free. Jul 9 10:11:49.657764 systemd[1]: Queued start job for default target multi-user.target. Jul 9 10:11:49.679905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 10:11:49.680312 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 10:11:49.923198 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 10:11:49.924114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:49.925740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:49.927301 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 10:11:49.927472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 10:11:49.928993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 10:11:49.929839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:49.931430 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 10:11:49.931705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 10:11:49.933136 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:49.933314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:49.934894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 10:11:49.936441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 10:11:49.938162 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 10:11:49.940066 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 10:11:49.951835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 10:11:49.955470 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 10:11:49.958264 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 10:11:49.960565 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 10:11:49.961992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 10:11:49.962051 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 10:11:49.964024 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 10:11:49.971519 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
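The modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units appearing above are instances of systemd's static modprobe@.service template; each instance substitutes the module name for the instance specifier, which is why the kernel reports "fuse: init" and "loop: module loaded" alongside them. Approximately, and from memory rather than from this system (exact fields may differ), the template looks like:

  # modprobe@.service (approximate upstream template, shown for orientation only)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target

  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %I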
Jul 9 10:11:49.972817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:49.974158 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 10:11:49.976290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 10:11:49.977614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 10:11:49.979824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 10:11:49.981115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 10:11:49.985996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:11:49.988247 systemd-journald[1152]: Time spent on flushing to /var/log/journal/a9644dfd74544bd1b037dd98e3893b05 is 22.777ms for 887 entries. Jul 9 10:11:49.988247 systemd-journald[1152]: System Journal (/var/log/journal/a9644dfd74544bd1b037dd98e3893b05) is 8M, max 195.6M, 187.6M free. Jul 9 10:11:50.017601 systemd-journald[1152]: Received client request to flush runtime journal. Jul 9 10:11:50.017653 kernel: loop0: detected capacity change from 0 to 207008 Jul 9 10:11:49.989943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 10:11:49.993452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 10:11:49.997352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 10:11:49.999475 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 10:11:50.002751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 10:11:50.006149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 10:11:50.010844 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 10:11:50.020599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:11:50.023211 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 10:11:50.024860 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jul 9 10:11:50.025100 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jul 9 10:11:50.028369 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 10:11:50.030213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 10:11:50.032166 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 10:11:50.041855 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 10:11:50.050765 kernel: loop1: detected capacity change from 0 to 105936 Jul 9 10:11:50.059551 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 10:11:50.062588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 10:11:50.078708 kernel: loop2: detected capacity change from 0 to 134232 Jul 9 10:11:50.082821 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 9 10:11:50.083111 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. 
Jul 9 10:11:50.087707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 10:11:50.119704 kernel: loop3: detected capacity change from 0 to 207008 Jul 9 10:11:50.125748 kernel: loop4: detected capacity change from 0 to 105936 Jul 9 10:11:50.130706 kernel: loop5: detected capacity change from 0 to 134232 Jul 9 10:11:50.136921 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 10:11:50.137375 (sd-merge)[1225]: Merged extensions into '/usr'. Jul 9 10:11:50.140962 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 10:11:50.140984 systemd[1]: Reloading... Jul 9 10:11:50.181050 zram_generator::config[1247]: No configuration found. Jul 9 10:11:50.260732 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 10:11:50.300475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:11:50.367203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 10:11:50.367417 systemd[1]: Reloading finished in 226 ms. Jul 9 10:11:50.396487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 10:11:50.398025 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 10:11:50.424850 systemd[1]: Starting ensure-sysext.service... Jul 9 10:11:50.426680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 10:11:50.440526 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 10:11:50.440560 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 10:11:50.440857 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 10:11:50.441052 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 10:11:50.441774 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 10:11:50.441994 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 10:11:50.442036 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 9 10:11:50.445407 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 9 10:11:50.445422 systemd[1]: Reloading... Jul 9 10:11:50.452389 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 10:11:50.452399 systemd-tmpfiles[1286]: Skipping /boot Jul 9 10:11:50.460925 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 10:11:50.460932 systemd-tmpfiles[1286]: Skipping /boot Jul 9 10:11:50.487693 zram_generator::config[1313]: No configuration found. Jul 9 10:11:50.560613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:11:50.628108 systemd[1]: Reloading finished in 182 ms. Jul 9 10:11:50.640700 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
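The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, picked up via the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. For an image such as kubernetes-v1.32.4-arm64.raw to be merged, it has to carry an extension-release file whose ID matches the host's os-release; a minimal sketch, with values assumed rather than read from the image:

  # usr/lib/extension-release.d/extension-release.kubernetes (inside the sysext image)
  # ID must match the host os-release (or be "_any") for systemd-sysext to merge it
  ID=flatcar
  SYSEXT_LEVEL=1.0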
Jul 9 10:11:50.647381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 10:11:50.663770 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 10:11:50.666328 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 10:11:50.668888 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 10:11:50.673526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 10:11:50.676811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 10:11:50.682884 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 10:11:50.689414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.690609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:50.693079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:50.708515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:50.709788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.709916 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.711811 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 10:11:50.715104 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 10:11:50.716957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:50.717108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:50.718976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 10:11:50.723995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:50.725751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:50.725947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:50.732156 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 9 10:11:50.734392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.736972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 10:11:50.739248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 10:11:50.747192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 10:11:50.748625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.748817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.752133 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 10:11:50.755913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 9 10:11:50.761807 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 10:11:50.766064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 10:11:50.773582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 10:11:50.774748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 10:11:50.775171 augenrules[1402]: No rules Jul 9 10:11:50.776567 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 10:11:50.776801 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 10:11:50.779146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 10:11:50.779295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 10:11:50.782258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 10:11:50.782422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 10:11:50.784982 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 10:11:50.803451 systemd[1]: Finished ensure-sysext.service. Jul 9 10:11:50.805949 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 10:11:50.822669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 10:11:50.829933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 10:11:50.831856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 10:11:50.831916 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 10:11:50.843044 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 10:11:50.844148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 10:11:50.844227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 10:11:50.847826 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 10:11:50.850840 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 10:11:50.853658 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 10:11:50.853893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 10:11:50.890596 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 10:11:50.905186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 10:11:50.910504 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 10:11:50.942893 systemd-resolved[1353]: Positive Trust Anchors: Jul 9 10:11:50.942909 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 10:11:50.942940 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 10:11:50.952507 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 10:11:50.955425 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jul 9 10:11:50.966376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 10:11:50.968233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 10:11:50.989446 systemd-networkd[1436]: lo: Link UP Jul 9 10:11:50.989454 systemd-networkd[1436]: lo: Gained carrier Jul 9 10:11:50.990282 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 10:11:50.990399 systemd-networkd[1436]: Enumeration completed Jul 9 10:11:50.991799 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 10:11:50.993287 systemd[1]: Reached target network.target - Network. Jul 9 10:11:50.993806 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:50.993810 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 10:11:50.994441 systemd-networkd[1436]: eth0: Link UP Jul 9 10:11:50.994552 systemd-networkd[1436]: eth0: Gained carrier Jul 9 10:11:50.994573 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 10:11:50.994655 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 10:11:50.996726 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 10:11:50.998192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 10:11:51.000185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 10:11:51.001941 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 10:11:51.001970 systemd[1]: Reached target paths.target - Path Units. Jul 9 10:11:51.003229 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 10:11:51.004894 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 10:11:51.006612 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 10:11:51.007750 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 10:11:51.007893 systemd[1]: Reached target timers.target - Timer Units. Jul 9 10:11:51.008845 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection. Jul 9 10:11:51.011263 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jul 9 10:11:51.011314 systemd-timesyncd[1438]: Initial clock synchronization to Wed 2025-07-09 10:11:51.265070 UTC. Jul 9 10:11:51.011447 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 10:11:51.014019 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 10:11:51.017072 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 10:11:51.018595 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 10:11:51.020134 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 10:11:51.024790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 10:11:51.026749 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 10:11:51.029439 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 10:11:51.031657 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 10:11:51.033504 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 10:11:51.041388 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 10:11:51.042499 systemd[1]: Reached target basic.target - Basic System. Jul 9 10:11:51.043801 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 10:11:51.043831 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 10:11:51.044795 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 10:11:51.046863 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 10:11:51.048947 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 10:11:51.053730 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 10:11:51.055646 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 10:11:51.056686 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 10:11:51.057633 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 10:11:51.061779 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 10:11:51.061924 jq[1469]: false Jul 9 10:11:51.063888 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 10:11:51.066885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 10:11:51.073816 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 10:11:51.075773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 10:11:51.077808 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 10:11:51.078210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 10:11:51.078966 extend-filesystems[1470]: Found /dev/vda6 Jul 9 10:11:51.079992 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 10:11:51.082023 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
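The eth0 configuration logged a little earlier (matched by /usr/lib/systemd/network/zz-default.network, DHCPv4 lease 10.0.0.140/16 from 10.0.0.1) comes from the catch-all network profile shipped with the OS. Its contents are not in this log; roughly, and as an assumption, it is a match-everything DHCP policy like the following, which is also why systemd-networkd warns that the match is based on a "potentially unpredictable interface name":

  # /usr/lib/systemd/network/zz-default.network (approximate, illustrative)
  [Match]
  Name=*

  [Network]
  DHCP=yes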
Jul 9 10:11:51.085286 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 10:11:51.090050 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 10:11:51.092323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 10:11:51.092508 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 10:11:51.092781 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 10:11:51.092901 extend-filesystems[1470]: Found /dev/vda9 Jul 9 10:11:51.092939 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 10:11:51.096136 extend-filesystems[1470]: Checking size of /dev/vda9 Jul 9 10:11:51.103066 jq[1487]: true Jul 9 10:11:51.104079 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 10:11:51.104266 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 10:11:51.120564 extend-filesystems[1470]: Resized partition /dev/vda9 Jul 9 10:11:51.123643 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 10:11:51.150771 jq[1497]: true Jul 9 10:11:51.154253 extend-filesystems[1510]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 10:11:51.163103 tar[1494]: linux-arm64/LICENSE Jul 9 10:11:51.163103 tar[1494]: linux-arm64/helm Jul 9 10:11:51.165826 dbus-daemon[1467]: [system] SELinux support is enabled Jul 9 10:11:51.166013 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 10:11:51.173183 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 10:11:51.173222 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 10:11:51.174633 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 10:11:51.174678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 10:11:51.177748 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 10:11:51.206012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 10:11:51.206029 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 10:11:51.206218 systemd-logind[1483]: New seat seat0. Jul 9 10:11:51.207573 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 10:11:51.231478 update_engine[1485]: I20250709 10:11:51.230589 1485 main.cc:92] Flatcar Update Engine starting Jul 9 10:11:51.233448 systemd[1]: Started update-engine.service - Update Engine. Jul 9 10:11:51.234099 update_engine[1485]: I20250709 10:11:51.233454 1485 update_check_scheduler.cc:74] Next update check in 3m48s Jul 9 10:11:51.238518 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
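prepare-helm.service, being started above, is one of the units Ignition wrote earlier from the user's provisioning config; per its description it unpacks helm into /opt/bin, and the tar[1494] lines listing linux-arm64/LICENSE and linux-arm64/helm are the extraction in progress. The actual unit contents are not in this log; a typical shape for such a oneshot unit would be the following, with the tarball path and options purely hypothetical:

  # /etc/systemd/system/prepare-helm.service (hypothetical contents, for illustration)
  [Unit]
  Description=Unpack helm to /opt/bin
  ConditionPathExists=!/opt/bin/helm

  [Service]
  Type=oneshot
  RemainAfterExit=true
  # path to the downloaded archive is an assumption
  ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm.tar.gz linux-arm64/helm

  [Install]
  WantedBy=multi-user.target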
Jul 9 10:11:51.248706 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 10:11:51.260328 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 10:11:51.260328 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 10:11:51.260328 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 10:11:51.270955 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Jul 9 10:11:51.263005 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 10:11:51.263269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 10:11:51.279928 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Jul 9 10:11:51.281455 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 10:11:51.283387 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 10:11:51.308150 locksmithd[1534]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 10:11:51.376853 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 10:11:51.396066 containerd[1498]: time="2025-07-09T10:11:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 10:11:51.398265 containerd[1498]: time="2025-07-09T10:11:51.398221120Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 10:11:51.402314 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 10:11:51.406620 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 9 10:11:51.412883 containerd[1498]: time="2025-07-09T10:11:51.412790680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="121µs" Jul 9 10:11:51.412949 containerd[1498]: time="2025-07-09T10:11:51.412873840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 10:11:51.413042 containerd[1498]: time="2025-07-09T10:11:51.412979240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 10:11:51.413746 containerd[1498]: time="2025-07-09T10:11:51.413648840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 10:11:51.413794 containerd[1498]: time="2025-07-09T10:11:51.413746720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 10:11:51.413794 containerd[1498]: time="2025-07-09T10:11:51.413781560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 10:11:51.413868 containerd[1498]: time="2025-07-09T10:11:51.413843800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 10:11:51.413868 containerd[1498]: time="2025-07-09T10:11:51.413862600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414122 containerd[1498]: time="2025-07-09T10:11:51.414089680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414122 containerd[1498]: time="2025-07-09T10:11:51.414112920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414167 containerd[1498]: time="2025-07-09T10:11:51.414124760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414167 containerd[1498]: time="2025-07-09T10:11:51.414133560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414218 containerd[1498]: time="2025-07-09T10:11:51.414204600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414420 containerd[1498]: time="2025-07-09T10:11:51.414401840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414466 containerd[1498]: time="2025-07-09T10:11:51.414434880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 10:11:51.414466 containerd[1498]: time="2025-07-09T10:11:51.414446000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 10:11:51.414504 containerd[1498]: time="2025-07-09T10:11:51.414482400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 10:11:51.414765 containerd[1498]: 
time="2025-07-09T10:11:51.414724480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 10:11:51.414856 containerd[1498]: time="2025-07-09T10:11:51.414833560Z" level=info msg="metadata content store policy set" policy=shared Jul 9 10:11:51.418181 containerd[1498]: time="2025-07-09T10:11:51.418147640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 10:11:51.418241 containerd[1498]: time="2025-07-09T10:11:51.418208920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 10:11:51.418241 containerd[1498]: time="2025-07-09T10:11:51.418225920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 10:11:51.418293 containerd[1498]: time="2025-07-09T10:11:51.418239400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 10:11:51.418293 containerd[1498]: time="2025-07-09T10:11:51.418253800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 10:11:51.418293 containerd[1498]: time="2025-07-09T10:11:51.418264800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 10:11:51.418293 containerd[1498]: time="2025-07-09T10:11:51.418280440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 10:11:51.418359 containerd[1498]: time="2025-07-09T10:11:51.418293520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 10:11:51.418359 containerd[1498]: time="2025-07-09T10:11:51.418306320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 10:11:51.418359 containerd[1498]: time="2025-07-09T10:11:51.418316680Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 10:11:51.418359 containerd[1498]: time="2025-07-09T10:11:51.418326600Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 10:11:51.418359 containerd[1498]: time="2025-07-09T10:11:51.418339640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418463960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418498440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418514960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418526840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418542600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 10:11:51.418554 containerd[1498]: time="2025-07-09T10:11:51.418553040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418565280Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418577200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418589240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418600440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418610720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 10:11:51.418840 containerd[1498]: time="2025-07-09T10:11:51.418828240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 10:11:51.418951 containerd[1498]: time="2025-07-09T10:11:51.418844920Z" level=info msg="Start snapshots syncer" Jul 9 10:11:51.418951 containerd[1498]: time="2025-07-09T10:11:51.418866680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 10:11:51.420332 containerd[1498]: time="2025-07-09T10:11:51.419076000Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 10:11:51.420463 containerd[1498]: time="2025-07-09T10:11:51.420358080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 10:11:51.420490 containerd[1498]: time="2025-07-09T10:11:51.420461480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 10:11:51.420666 containerd[1498]: 
time="2025-07-09T10:11:51.420594160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 10:11:51.420666 containerd[1498]: time="2025-07-09T10:11:51.420646320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 10:11:51.420666 containerd[1498]: time="2025-07-09T10:11:51.420664400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 10:11:51.420666 containerd[1498]: time="2025-07-09T10:11:51.420693840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 10:11:51.420666 containerd[1498]: time="2025-07-09T10:11:51.420716560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 10:11:51.420992 containerd[1498]: time="2025-07-09T10:11:51.420733960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 10:11:51.420992 containerd[1498]: time="2025-07-09T10:11:51.420749640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 10:11:51.420992 containerd[1498]: time="2025-07-09T10:11:51.420780240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 10:11:51.420992 containerd[1498]: time="2025-07-09T10:11:51.420796680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 10:11:51.420992 containerd[1498]: time="2025-07-09T10:11:51.420811880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 10:11:51.421345 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 10:11:51.421539 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 9 10:11:51.423875 containerd[1498]: time="2025-07-09T10:11:51.423834360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423881320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423892080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423901920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423910400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423920880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 10:11:51.423943 containerd[1498]: time="2025-07-09T10:11:51.423934120Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 10:11:51.424054 containerd[1498]: time="2025-07-09T10:11:51.424014160Z" level=info msg="runtime interface created" Jul 9 10:11:51.424054 containerd[1498]: time="2025-07-09T10:11:51.424019880Z" level=info msg="created NRI interface" Jul 9 10:11:51.424054 containerd[1498]: time="2025-07-09T10:11:51.424028600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 10:11:51.424054 containerd[1498]: time="2025-07-09T10:11:51.424043560Z" level=info msg="Connect containerd service" Jul 9 10:11:51.424176 containerd[1498]: time="2025-07-09T10:11:51.424125040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 10:11:51.424881 containerd[1498]: time="2025-07-09T10:11:51.424848000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 10:11:51.425081 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 10:11:51.452001 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 10:11:51.455546 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 10:11:51.457620 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 10:11:51.460982 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 10:11:51.495301 tar[1494]: linux-arm64/README.md Jul 9 10:11:51.511783 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
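[Annotation] The containerd CRI plugin logs above that no network config was found in /etc/cni/net.d, so pod networking stays uninitialized until something drops a CNI config there (normally the cluster's network add-on, installed later). Below is a minimal sketch, written in Go only for illustration, of creating such a file; the bridge/host-local plugin choice, the subnet, and the file name are assumptions, not what this host will actually use.

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    // Illustrative only: a minimal CNI conflist using the standard bridge and
    // host-local plugins. Real clusters usually get this file from their network
    // add-on (flannel, calico, ...); the values below are assumptions.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    `

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	// containerd's CNI conf syncer ("Start cni network conf syncer" in the
    	// log below) notices new files in this directory without a restart.
    	if err := os.WriteFile(filepath.Join(dir, "10-example.conflist"), []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }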
Jul 9 10:11:51.527055 containerd[1498]: time="2025-07-09T10:11:51.526942640Z" level=info msg="Start subscribing containerd event" Jul 9 10:11:51.527055 containerd[1498]: time="2025-07-09T10:11:51.527034440Z" level=info msg="Start recovering state" Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527125360Z" level=info msg="Start event monitor" Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527139880Z" level=info msg="Start cni network conf syncer for default" Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527148520Z" level=info msg="Start streaming server" Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527157040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527164120Z" level=info msg="runtime interface starting up..." Jul 9 10:11:51.527180 containerd[1498]: time="2025-07-09T10:11:51.527169680Z" level=info msg="starting plugins..." Jul 9 10:11:51.527315 containerd[1498]: time="2025-07-09T10:11:51.527183000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 10:11:51.527315 containerd[1498]: time="2025-07-09T10:11:51.527289920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 10:11:51.527350 containerd[1498]: time="2025-07-09T10:11:51.527334880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 10:11:51.527510 containerd[1498]: time="2025-07-09T10:11:51.527495720Z" level=info msg="containerd successfully booted in 0.131785s" Jul 9 10:11:51.527646 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 10:11:52.733157 systemd-networkd[1436]: eth0: Gained IPv6LL Jul 9 10:11:52.735945 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 10:11:52.737678 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 10:11:52.740161 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 10:11:52.742665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:11:52.753990 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 10:11:52.768660 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 10:11:52.768922 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 10:11:52.770637 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 10:11:52.774568 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 10:11:53.309927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:11:53.311568 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 10:11:53.312920 systemd[1]: Startup finished in 2.153s (kernel) + 5.620s (initrd) + 4.102s (userspace) = 11.876s. 
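[Annotation] Once containerd reports it is serving on /run/containerd/containerd.sock, other components (the kubelet's CRI client, ctr, Docker's containerd integration) talk to it over that socket. A minimal sketch of doing the same from the Go client follows; it assumes the classic github.com/containerd/containerd client module (containerd 2.x ships the client under slightly different import paths), and it uses the "k8s.io" namespace, which is the one the CRI plugin registers (also visible in the NRI registration above).

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the socket containerd reports it is serving on.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps its images and containers in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ver, err := client.Version(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("containerd", ver.Version, ver.Revision)

    	imgs, err := client.ListImages(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs {
    		fmt.Println(img.Name())
    	}
    }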
Jul 9 10:11:53.313504 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 10:11:53.723484 kubelet[1607]: E0709 10:11:53.723366 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 10:11:53.725748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 10:11:53.725893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 10:11:53.726234 systemd[1]: kubelet.service: Consumed 801ms CPU time, 257.5M memory peak. Jul 9 10:11:57.015151 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 10:11:57.016462 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200). Jul 9 10:11:57.116386 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.117713 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.123374 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 10:11:57.124217 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 10:11:57.130290 systemd-logind[1483]: New session 1 of user core. Jul 9 10:11:57.146997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 10:11:57.149314 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 10:11:57.165394 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 10:11:57.167351 systemd-logind[1483]: New session c1 of user core. Jul 9 10:11:57.278541 systemd[1626]: Queued start job for default target default.target. Jul 9 10:11:57.298753 systemd[1626]: Created slice app.slice - User Application Slice. Jul 9 10:11:57.298778 systemd[1626]: Reached target paths.target - Paths. Jul 9 10:11:57.298813 systemd[1626]: Reached target timers.target - Timers. Jul 9 10:11:57.300038 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 10:11:57.309028 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 10:11:57.309086 systemd[1626]: Reached target sockets.target - Sockets. Jul 9 10:11:57.309124 systemd[1626]: Reached target basic.target - Basic System. Jul 9 10:11:57.309158 systemd[1626]: Reached target default.target - Main User Target. Jul 9 10:11:57.309184 systemd[1626]: Startup finished in 136ms. Jul 9 10:11:57.309310 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 10:11:57.310609 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 10:11:57.375963 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:37212.service - OpenSSH per-connection server daemon (10.0.0.1:37212). Jul 9 10:11:57.430322 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 37212 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.431403 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.435771 systemd-logind[1483]: New session 2 of user core. 
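[Annotation] The first kubelet start fails because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is only written by kubeadm init/join, so the failures (and the scheduled restarts later in the log) are expected until then. A trivial diagnostic sketch that reproduces the same check; the path is taken from the error above, nothing else is assumed.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	const path = "/var/lib/kubelet/config.yaml"
    	// The kubelet aborts with "open ...: no such file or directory" when this
    	// open fails, which is exactly the error shown in the log above.
    	f, err := os.Open(path)
    	if err != nil {
    		fmt.Printf("kubelet would fail: %v\n", err)
    		os.Exit(1)
    	}
    	defer f.Close()
    	fmt.Println("config present; kubelet can load", path)
    }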
Jul 9 10:11:57.451840 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 10:11:57.503906 sshd[1641]: Connection closed by 10.0.0.1 port 37212 Jul 9 10:11:57.504287 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Jul 9 10:11:57.513810 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:37212.service: Deactivated successfully. Jul 9 10:11:57.515146 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 10:11:57.517843 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jul 9 10:11:57.519079 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:37224.service - OpenSSH per-connection server daemon (10.0.0.1:37224). Jul 9 10:11:57.520933 systemd-logind[1483]: Removed session 2. Jul 9 10:11:57.568578 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 37224 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.569623 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.574406 systemd-logind[1483]: New session 3 of user core. Jul 9 10:11:57.584840 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 10:11:57.633791 sshd[1650]: Connection closed by 10.0.0.1 port 37224 Jul 9 10:11:57.633648 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jul 9 10:11:57.650720 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:37224.service: Deactivated successfully. Jul 9 10:11:57.652160 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 10:11:57.653543 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jul 9 10:11:57.654709 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:37232.service - OpenSSH per-connection server daemon (10.0.0.1:37232). Jul 9 10:11:57.655510 systemd-logind[1483]: Removed session 3. Jul 9 10:11:57.709748 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 37232 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.711119 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.714977 systemd-logind[1483]: New session 4 of user core. Jul 9 10:11:57.733886 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 10:11:57.785718 sshd[1659]: Connection closed by 10.0.0.1 port 37232 Jul 9 10:11:57.786639 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Jul 9 10:11:57.793595 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:37232.service: Deactivated successfully. Jul 9 10:11:57.796088 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 10:11:57.797901 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jul 9 10:11:57.800287 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:37236.service - OpenSSH per-connection server daemon (10.0.0.1:37236). Jul 9 10:11:57.800837 systemd-logind[1483]: Removed session 4. Jul 9 10:11:57.857969 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:57.859162 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:57.863634 systemd-logind[1483]: New session 5 of user core. Jul 9 10:11:57.886880 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 9 10:11:57.950252 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 10:11:57.950533 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:11:57.977698 sudo[1669]: pam_unix(sudo:session): session closed for user root Jul 9 10:11:57.979254 sshd[1668]: Connection closed by 10.0.0.1 port 37236 Jul 9 10:11:57.979607 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Jul 9 10:11:58.001635 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:37236.service: Deactivated successfully. Jul 9 10:11:58.004142 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 10:11:58.005180 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jul 9 10:11:58.008213 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248). Jul 9 10:11:58.008650 systemd-logind[1483]: Removed session 5. Jul 9 10:11:58.060001 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:58.061188 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:58.065554 systemd-logind[1483]: New session 6 of user core. Jul 9 10:11:58.086869 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 10:11:58.138472 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 10:11:58.138768 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:11:58.209208 sudo[1680]: pam_unix(sudo:session): session closed for user root Jul 9 10:11:58.214228 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 10:11:58.214506 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:11:58.224954 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 10:11:58.252437 augenrules[1702]: No rules Jul 9 10:11:58.253939 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 10:11:58.254780 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 10:11:58.256198 sudo[1679]: pam_unix(sudo:session): session closed for user root Jul 9 10:11:58.257305 sshd[1678]: Connection closed by 10.0.0.1 port 37248 Jul 9 10:11:58.257830 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jul 9 10:11:58.269700 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:37248.service: Deactivated successfully. Jul 9 10:11:58.271097 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 10:11:58.271776 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jul 9 10:11:58.274276 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:37260.service - OpenSSH per-connection server daemon (10.0.0.1:37260). Jul 9 10:11:58.275128 systemd-logind[1483]: Removed session 6. Jul 9 10:11:58.341831 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 37260 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:11:58.342923 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:11:58.346776 systemd-logind[1483]: New session 7 of user core. Jul 9 10:11:58.355840 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 9 10:11:58.406065 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 10:11:58.406346 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:11:58.766566 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 10:11:58.776007 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 10:11:59.027593 dockerd[1736]: time="2025-07-09T10:11:59.027464571Z" level=info msg="Starting up" Jul 9 10:11:59.028717 dockerd[1736]: time="2025-07-09T10:11:59.028633723Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 10:11:59.038497 dockerd[1736]: time="2025-07-09T10:11:59.038464248Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 9 10:11:59.054703 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2316686547-merged.mount: Deactivated successfully. Jul 9 10:11:59.072559 dockerd[1736]: time="2025-07-09T10:11:59.072360196Z" level=info msg="Loading containers: start." Jul 9 10:11:59.082713 kernel: Initializing XFRM netlink socket Jul 9 10:11:59.288329 systemd-networkd[1436]: docker0: Link UP Jul 9 10:11:59.291869 dockerd[1736]: time="2025-07-09T10:11:59.291823392Z" level=info msg="Loading containers: done." Jul 9 10:11:59.308745 dockerd[1736]: time="2025-07-09T10:11:59.308367556Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 10:11:59.308745 dockerd[1736]: time="2025-07-09T10:11:59.308475242Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 9 10:11:59.308745 dockerd[1736]: time="2025-07-09T10:11:59.308557697Z" level=info msg="Initializing buildkit" Jul 9 10:11:59.331799 dockerd[1736]: time="2025-07-09T10:11:59.331763419Z" level=info msg="Completed buildkit initialization" Jul 9 10:11:59.336646 dockerd[1736]: time="2025-07-09T10:11:59.336585737Z" level=info msg="Daemon has completed initialization" Jul 9 10:11:59.336774 dockerd[1736]: time="2025-07-09T10:11:59.336662887Z" level=info msg="API listen on /run/docker.sock" Jul 9 10:11:59.336778 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 10:11:59.903388 containerd[1498]: time="2025-07-09T10:11:59.903331795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 9 10:12:00.050755 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1016082492-merged.mount: Deactivated successfully. Jul 9 10:12:00.418534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154983163.mount: Deactivated successfully. 
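[Annotation] dockerd logs "API listen on /run/docker.sock"; the Engine API is plain HTTP carried over that unix socket, which is how the docker CLI reaches the daemon. A minimal standard-library sketch querying the version endpoint over the socket; GET /version is a real Engine API route, the host name in the URL is a placeholder that is ignored once the dialer is overridden.

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"log"
    	"net"
    	"net/http"
    )

    func main() {
    	// Dial the unix socket dockerd says it is listening on.
    	client := &http.Client{
    		Transport: &http.Transport{
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}

    	resp, err := client.Get("http://docker/version")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Expect JSON containing the daemon version (28.0.4 in the log above).
    	fmt.Println(string(body))
    }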
Jul 9 10:12:01.358618 containerd[1498]: time="2025-07-09T10:12:01.358565739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:01.359538 containerd[1498]: time="2025-07-09T10:12:01.359270036Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 9 10:12:01.360276 containerd[1498]: time="2025-07-09T10:12:01.360240205Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:01.364316 containerd[1498]: time="2025-07-09T10:12:01.364278008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:01.365241 containerd[1498]: time="2025-07-09T10:12:01.365213853Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.461839231s" Jul 9 10:12:01.365241 containerd[1498]: time="2025-07-09T10:12:01.365241959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 9 10:12:01.365938 containerd[1498]: time="2025-07-09T10:12:01.365913627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 9 10:12:02.427635 containerd[1498]: time="2025-07-09T10:12:02.427582551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:02.428285 containerd[1498]: time="2025-07-09T10:12:02.428239631Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 9 10:12:02.428920 containerd[1498]: time="2025-07-09T10:12:02.428873923Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:02.431656 containerd[1498]: time="2025-07-09T10:12:02.431623005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:02.432759 containerd[1498]: time="2025-07-09T10:12:02.432725574Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.066778723s" Jul 9 10:12:02.432801 containerd[1498]: time="2025-07-09T10:12:02.432763044Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 9 10:12:02.433221 containerd[1498]: 
time="2025-07-09T10:12:02.433183929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 9 10:12:03.511981 containerd[1498]: time="2025-07-09T10:12:03.511934044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:03.515395 containerd[1498]: time="2025-07-09T10:12:03.515348124Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 9 10:12:03.516284 containerd[1498]: time="2025-07-09T10:12:03.516241886Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:03.521060 containerd[1498]: time="2025-07-09T10:12:03.521018729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:03.522833 containerd[1498]: time="2025-07-09T10:12:03.522784777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.089557901s" Jul 9 10:12:03.522877 containerd[1498]: time="2025-07-09T10:12:03.522832241Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 9 10:12:03.523332 containerd[1498]: time="2025-07-09T10:12:03.523286657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 10:12:03.976360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 10:12:03.978528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:04.156408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:04.174086 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 10:12:04.271239 kubelet[2025]: E0709 10:12:04.271043 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 10:12:04.275088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 10:12:04.275225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 10:12:04.275738 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.5M memory peak. Jul 9 10:12:04.524843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319048304.mount: Deactivated successfully. 
Jul 9 10:12:04.966661 containerd[1498]: time="2025-07-09T10:12:04.966525993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:04.967525 containerd[1498]: time="2025-07-09T10:12:04.967350993Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 9 10:12:04.968203 containerd[1498]: time="2025-07-09T10:12:04.968169995Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:04.970411 containerd[1498]: time="2025-07-09T10:12:04.970348954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:04.971422 containerd[1498]: time="2025-07-09T10:12:04.971141991Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.447728463s" Jul 9 10:12:04.971422 containerd[1498]: time="2025-07-09T10:12:04.971177215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 9 10:12:04.971839 containerd[1498]: time="2025-07-09T10:12:04.971812369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 10:12:05.513627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637152417.mount: Deactivated successfully. 
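[Annotation] Each "Pulled image" line reports both the image size and the wall-clock duration, so effective registry throughput can be read straight off the log: the kube-proxy pull above moved 27,377,425 bytes in 1.447728463 s, roughly 19 MB/s. A small sketch of that arithmetic; both numbers are copied verbatim from the log line, nothing else is assumed.

    package main

    import "fmt"

    func main() {
    	// Size and duration as reported for the kube-proxy pull in the log above.
    	const bytes = 27377425.0    // size "27377425"
    	const seconds = 1.447728463 // "in 1.447728463s"

    	mbPerSec := bytes / seconds / 1e6
    	mibPerSec := bytes / seconds / (1 << 20)
    	fmt.Printf("effective pull throughput: %.1f MB/s (%.1f MiB/s)\n", mbPerSec, mibPerSec)
    }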
Jul 9 10:12:06.202046 containerd[1498]: time="2025-07-09T10:12:06.201987505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:06.202611 containerd[1498]: time="2025-07-09T10:12:06.202565601Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 9 10:12:06.203480 containerd[1498]: time="2025-07-09T10:12:06.203445968Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:06.206546 containerd[1498]: time="2025-07-09T10:12:06.206508523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:06.207169 containerd[1498]: time="2025-07-09T10:12:06.207100044Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.235172635s" Jul 9 10:12:06.207169 containerd[1498]: time="2025-07-09T10:12:06.207138029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 10:12:06.207687 containerd[1498]: time="2025-07-09T10:12:06.207649158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 10:12:06.632209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359586143.mount: Deactivated successfully. 
Jul 9 10:12:06.637728 containerd[1498]: time="2025-07-09T10:12:06.637664916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:12:06.638199 containerd[1498]: time="2025-07-09T10:12:06.638167403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 9 10:12:06.639200 containerd[1498]: time="2025-07-09T10:12:06.639138091Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:12:06.640939 containerd[1498]: time="2025-07-09T10:12:06.640906423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:12:06.641785 containerd[1498]: time="2025-07-09T10:12:06.641749448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.049724ms" Jul 9 10:12:06.641845 containerd[1498]: time="2025-07-09T10:12:06.641786026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 9 10:12:06.642372 containerd[1498]: time="2025-07-09T10:12:06.642302864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 10:12:07.144485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255263027.mount: Deactivated successfully. 
Jul 9 10:12:08.558688 containerd[1498]: time="2025-07-09T10:12:08.558620861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:08.559692 containerd[1498]: time="2025-07-09T10:12:08.559235194Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 9 10:12:08.560298 containerd[1498]: time="2025-07-09T10:12:08.560272145Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:08.563020 containerd[1498]: time="2025-07-09T10:12:08.562965158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:08.564223 containerd[1498]: time="2025-07-09T10:12:08.564048442Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.921715119s" Jul 9 10:12:08.564223 containerd[1498]: time="2025-07-09T10:12:08.564084255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 9 10:12:14.008565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:14.008748 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.5M memory peak. Jul 9 10:12:14.010642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:14.031632 systemd[1]: Reload requested from client PID 2181 ('systemctl') (unit session-7.scope)... Jul 9 10:12:14.031648 systemd[1]: Reloading... Jul 9 10:12:14.103711 zram_generator::config[2223]: No configuration found. Jul 9 10:12:14.274605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:12:14.362148 systemd[1]: Reloading finished in 330 ms. Jul 9 10:12:14.432295 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 10:12:14.432383 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 10:12:14.432661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:14.432735 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.1M memory peak. Jul 9 10:12:14.434280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:14.548066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:14.551641 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:12:14.587119 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:12:14.587119 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 9 10:12:14.587119 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:12:14.587448 kubelet[2268]: I0709 10:12:14.587167 2268 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 10:12:15.542396 kubelet[2268]: I0709 10:12:15.541874 2268 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 10:12:15.542396 kubelet[2268]: I0709 10:12:15.541910 2268 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 10:12:15.542396 kubelet[2268]: I0709 10:12:15.542327 2268 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 10:12:15.615204 kubelet[2268]: E0709 10:12:15.615155 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:12:15.619178 kubelet[2268]: I0709 10:12:15.619139 2268 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:12:15.627346 kubelet[2268]: I0709 10:12:15.627311 2268 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 10:12:15.630133 kubelet[2268]: I0709 10:12:15.630107 2268 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 10:12:15.630345 kubelet[2268]: I0709 10:12:15.630319 2268 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 10:12:15.630503 kubelet[2268]: I0709 10:12:15.630346 2268 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 10:12:15.630604 kubelet[2268]: I0709 10:12:15.630581 2268 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 10:12:15.630604 kubelet[2268]: I0709 10:12:15.630589 2268 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 10:12:15.630809 kubelet[2268]: I0709 10:12:15.630794 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:15.633144 kubelet[2268]: I0709 10:12:15.633123 2268 kubelet.go:446] "Attempting to sync node with API server" Jul 9 10:12:15.633144 kubelet[2268]: I0709 10:12:15.633146 2268 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 10:12:15.633202 kubelet[2268]: I0709 10:12:15.633167 2268 kubelet.go:352] "Adding apiserver pod source" Jul 9 10:12:15.633202 kubelet[2268]: I0709 10:12:15.633181 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 10:12:15.635781 kubelet[2268]: I0709 10:12:15.635756 2268 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 10:12:15.637253 kubelet[2268]: I0709 10:12:15.637230 2268 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 10:12:15.637606 kubelet[2268]: W0709 10:12:15.637592 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
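[Annotation] The NodeConfig dump above embeds the kubelet's default hard eviction thresholds as JSON: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch that unmarshals just that slice of the structure, with field names copied from the logged JSON; the Go types here are a hand-written subset for illustration, not the kubelet's own types.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // Subset of the NodeConfig JSON printed by the kubelet above; only the
    // fields needed for the eviction thresholds are modelled.
    type threshold struct {
    	Signal   string `json:"Signal"`
    	Operator string `json:"Operator"`
    	Value    struct {
    		Quantity   *string `json:"Quantity"`
    		Percentage float64 `json:"Percentage"`
    	} `json:"Value"`
    	GracePeriod int64 `json:"GracePeriod"`
    }

    type nodeConfig struct {
    	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
    }

    func main() {
    	// Trimmed copy of the logged config: one quantity-based and one
    	// percentage-based threshold.
    	raw := `{"HardEvictionThresholds":[
    	 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0},
    	 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0}]}`

    	var cfg nodeConfig
    	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
    		log.Fatal(err)
    	}
    	for _, t := range cfg.HardEvictionThresholds {
    		if t.Value.Quantity != nil {
    			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
    		} else {
    			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
    		}
    	}
    }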
Jul 9 10:12:15.639345 kubelet[2268]: I0709 10:12:15.639318 2268 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 10:12:15.639454 kubelet[2268]: I0709 10:12:15.639445 2268 server.go:1287] "Started kubelet" Jul 9 10:12:15.641259 kubelet[2268]: W0709 10:12:15.640963 2268 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jul 9 10:12:15.641259 kubelet[2268]: E0709 10:12:15.641021 2268 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:12:15.641259 kubelet[2268]: I0709 10:12:15.641164 2268 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 10:12:15.646093 kubelet[2268]: I0709 10:12:15.646036 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 10:12:15.646920 kubelet[2268]: I0709 10:12:15.646885 2268 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 10:12:15.647731 kubelet[2268]: I0709 10:12:15.647474 2268 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 10:12:15.647731 kubelet[2268]: I0709 10:12:15.646882 2268 server.go:479] "Adding debug handlers to kubelet server" Jul 9 10:12:15.648019 kubelet[2268]: W0709 10:12:15.646347 2268 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jul 9 10:12:15.648062 kubelet[2268]: E0709 10:12:15.648042 2268 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:12:15.648424 kubelet[2268]: I0709 10:12:15.646765 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 10:12:15.649250 kubelet[2268]: I0709 10:12:15.649232 2268 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 10:12:15.649605 kubelet[2268]: I0709 10:12:15.649588 2268 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 10:12:15.650712 kubelet[2268]: I0709 10:12:15.649651 2268 reconciler.go:26] "Reconciler: start to sync state" Jul 9 10:12:15.650712 kubelet[2268]: E0709 10:12:15.650126 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:12:15.658473 kubelet[2268]: I0709 10:12:15.657975 2268 factory.go:221] Registration of the systemd container factory successfully Jul 9 10:12:15.658473 kubelet[2268]: W0709 10:12:15.658427 2268 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: 
connect: connection refused Jul 9 10:12:15.658473 kubelet[2268]: E0709 10:12:15.658468 2268 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:12:15.658587 kubelet[2268]: I0709 10:12:15.658536 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:12:15.659109 kubelet[2268]: E0709 10:12:15.658864 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18508d973666f252 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 10:12:15.639417426 +0000 UTC m=+1.084901370,LastTimestamp:2025-07-09 10:12:15.639417426 +0000 UTC m=+1.084901370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 10:12:15.659212 kubelet[2268]: E0709 10:12:15.659157 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Jul 9 10:12:15.660403 kubelet[2268]: E0709 10:12:15.660309 2268 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 10:12:15.660477 kubelet[2268]: I0709 10:12:15.660437 2268 factory.go:221] Registration of the containerd container factory successfully Jul 9 10:12:15.667863 kubelet[2268]: I0709 10:12:15.667840 2268 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:12:15.667863 kubelet[2268]: I0709 10:12:15.667857 2268 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:12:15.668005 kubelet[2268]: I0709 10:12:15.667876 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:15.748333 kubelet[2268]: I0709 10:12:15.747912 2268 policy_none.go:49] "None policy: Start" Jul 9 10:12:15.748333 kubelet[2268]: I0709 10:12:15.747959 2268 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:12:15.748333 kubelet[2268]: I0709 10:12:15.747972 2268 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:12:15.750786 kubelet[2268]: E0709 10:12:15.750751 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:12:15.751464 kubelet[2268]: I0709 10:12:15.751438 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 10:12:15.752968 kubelet[2268]: I0709 10:12:15.752945 2268 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 10:12:15.753092 kubelet[2268]: I0709 10:12:15.753078 2268 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 10:12:15.753173 kubelet[2268]: I0709 10:12:15.753156 2268 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 10:12:15.753749 kubelet[2268]: I0709 10:12:15.753720 2268 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 10:12:15.753885 kubelet[2268]: E0709 10:12:15.753867 2268 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:12:15.754550 kubelet[2268]: W0709 10:12:15.754506 2268 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jul 9 10:12:15.754735 kubelet[2268]: E0709 10:12:15.754654 2268 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:12:15.754755 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 10:12:15.766487 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 10:12:15.769463 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 10:12:15.785702 kubelet[2268]: I0709 10:12:15.785545 2268 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 10:12:15.785820 kubelet[2268]: I0709 10:12:15.785800 2268 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:12:15.785851 kubelet[2268]: I0709 10:12:15.785818 2268 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:12:15.786080 kubelet[2268]: I0709 10:12:15.786067 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 10:12:15.787561 kubelet[2268]: E0709 10:12:15.787534 2268 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 10:12:15.787717 kubelet[2268]: E0709 10:12:15.787668 2268 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 10:12:15.859589 kubelet[2268]: E0709 10:12:15.859474 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Jul 9 10:12:15.862766 systemd[1]: Created slice kubepods-burstable-pod220a83b0fb59a90889496b949033f316.slice - libcontainer container kubepods-burstable-pod220a83b0fb59a90889496b949033f316.slice. 
Jul 9 10:12:15.886199 kubelet[2268]: E0709 10:12:15.886013 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:15.888839 kubelet[2268]: I0709 10:12:15.888721 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:15.889207 kubelet[2268]: E0709 10:12:15.889169 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 9 10:12:15.889347 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 9 10:12:15.891071 kubelet[2268]: E0709 10:12:15.891043 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:15.893715 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 9 10:12:15.895194 kubelet[2268]: E0709 10:12:15.895175 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:16.051012 kubelet[2268]: I0709 10:12:16.050922 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:16.051012 kubelet[2268]: I0709 10:12:16.050964 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:16.051012 kubelet[2268]: I0709 10:12:16.050984 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:16.051179 kubelet[2268]: I0709 10:12:16.051031 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:16.051179 kubelet[2268]: I0709 10:12:16.051090 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:16.051179 kubelet[2268]: I0709 10:12:16.051127 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:16.051179 kubelet[2268]: I0709 10:12:16.051160 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:16.051260 kubelet[2268]: I0709 10:12:16.051184 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:16.051260 kubelet[2268]: I0709 10:12:16.051212 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:16.090204 kubelet[2268]: I0709 10:12:16.090182 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:16.090512 kubelet[2268]: E0709 10:12:16.090487 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 9 10:12:16.190019 containerd[1498]: time="2025-07-09T10:12:16.189832200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:220a83b0fb59a90889496b949033f316,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:16.192441 containerd[1498]: time="2025-07-09T10:12:16.192410755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:16.196163 containerd[1498]: time="2025-07-09T10:12:16.196134704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:16.218467 containerd[1498]: time="2025-07-09T10:12:16.218428810Z" level=info msg="connecting to shim c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42" address="unix:///run/containerd/s/8b22c07ec635f94ac27d23844b1fc10823e743318d54e22cc8d9e506d8c4691d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:16.219896 containerd[1498]: time="2025-07-09T10:12:16.219835139Z" level=info msg="connecting to shim 2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24" address="unix:///run/containerd/s/1ed88fd4bc6a9a5aa09e23faf23993bca07eb81bfc484f994d66e118b91630d6" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:16.234232 containerd[1498]: time="2025-07-09T10:12:16.234192640Z" level=info msg="connecting to shim 24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3" address="unix:///run/containerd/s/d50ae62419bb22dcf87d2b7c5d7ba868d563eecec3123cf5aec14729e4c5c590" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:16.245048 
systemd[1]: Started cri-containerd-c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42.scope - libcontainer container c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42. Jul 9 10:12:16.250493 systemd[1]: Started cri-containerd-2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24.scope - libcontainer container 2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24. Jul 9 10:12:16.254035 systemd[1]: Started cri-containerd-24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3.scope - libcontainer container 24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3. Jul 9 10:12:16.260567 kubelet[2268]: E0709 10:12:16.260481 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Jul 9 10:12:16.288373 containerd[1498]: time="2025-07-09T10:12:16.288329411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42\"" Jul 9 10:12:16.292346 containerd[1498]: time="2025-07-09T10:12:16.292300357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3\"" Jul 9 10:12:16.293211 containerd[1498]: time="2025-07-09T10:12:16.293163346Z" level=info msg="CreateContainer within sandbox \"c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 10:12:16.294809 containerd[1498]: time="2025-07-09T10:12:16.294772495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:220a83b0fb59a90889496b949033f316,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24\"" Jul 9 10:12:16.294960 containerd[1498]: time="2025-07-09T10:12:16.294926413Z" level=info msg="CreateContainer within sandbox \"24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 10:12:16.296771 containerd[1498]: time="2025-07-09T10:12:16.296744150Z" level=info msg="CreateContainer within sandbox \"2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 10:12:16.302754 containerd[1498]: time="2025-07-09T10:12:16.302723919Z" level=info msg="Container 8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:16.304824 containerd[1498]: time="2025-07-09T10:12:16.304792619Z" level=info msg="Container e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:16.306710 containerd[1498]: time="2025-07-09T10:12:16.306663545Z" level=info msg="Container 03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:16.310967 containerd[1498]: time="2025-07-09T10:12:16.310874199Z" level=info msg="CreateContainer within sandbox 
\"c92f565bd21dcdca84f67d9b7fa410a7d1065b795a7c1c4f7ad84913b9462e42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d\"" Jul 9 10:12:16.311715 containerd[1498]: time="2025-07-09T10:12:16.311444453Z" level=info msg="StartContainer for \"8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d\"" Jul 9 10:12:16.312976 containerd[1498]: time="2025-07-09T10:12:16.312946864Z" level=info msg="connecting to shim 8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d" address="unix:///run/containerd/s/8b22c07ec635f94ac27d23844b1fc10823e743318d54e22cc8d9e506d8c4691d" protocol=ttrpc version=3 Jul 9 10:12:16.313342 containerd[1498]: time="2025-07-09T10:12:16.313249333Z" level=info msg="CreateContainer within sandbox \"24076187b4eb8c9e937bf2e09a81eaf274c86c2d835d794202be87466058d2e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397\"" Jul 9 10:12:16.313590 containerd[1498]: time="2025-07-09T10:12:16.313563898Z" level=info msg="StartContainer for \"e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397\"" Jul 9 10:12:16.314515 containerd[1498]: time="2025-07-09T10:12:16.314490369Z" level=info msg="connecting to shim e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397" address="unix:///run/containerd/s/d50ae62419bb22dcf87d2b7c5d7ba868d563eecec3123cf5aec14729e4c5c590" protocol=ttrpc version=3 Jul 9 10:12:16.315502 containerd[1498]: time="2025-07-09T10:12:16.315457813Z" level=info msg="CreateContainer within sandbox \"2d1efd087a0eeb0b9caf67a23b8643c14df4697f0c22fe5a4a506f4a592baf24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174\"" Jul 9 10:12:16.315936 containerd[1498]: time="2025-07-09T10:12:16.315910275Z" level=info msg="StartContainer for \"03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174\"" Jul 9 10:12:16.317162 containerd[1498]: time="2025-07-09T10:12:16.317122794Z" level=info msg="connecting to shim 03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174" address="unix:///run/containerd/s/1ed88fd4bc6a9a5aa09e23faf23993bca07eb81bfc484f994d66e118b91630d6" protocol=ttrpc version=3 Jul 9 10:12:16.329834 systemd[1]: Started cri-containerd-8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d.scope - libcontainer container 8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d. Jul 9 10:12:16.333530 systemd[1]: Started cri-containerd-03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174.scope - libcontainer container 03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174. Jul 9 10:12:16.334410 systemd[1]: Started cri-containerd-e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397.scope - libcontainer container e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397. 
Jul 9 10:12:16.379046 containerd[1498]: time="2025-07-09T10:12:16.378381282Z" level=info msg="StartContainer for \"8647d6564d5ed0906f55d5b61380d4ae11cd702458c5aeb22e1f12495b75286d\" returns successfully" Jul 9 10:12:16.401152 containerd[1498]: time="2025-07-09T10:12:16.398613017Z" level=info msg="StartContainer for \"03f0b3d9eed3c0b69bff4a7e6bd761425ae2caa37d601e4e016669c947dd3174\" returns successfully" Jul 9 10:12:16.412005 containerd[1498]: time="2025-07-09T10:12:16.411380513Z" level=info msg="StartContainer for \"e2a2f6eb1fd384d9a51b25fb0a9772eef8484bdb92f2c417826fc890dafdd397\" returns successfully" Jul 9 10:12:16.493246 kubelet[2268]: I0709 10:12:16.493146 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:16.493533 kubelet[2268]: E0709 10:12:16.493509 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 9 10:12:16.761894 kubelet[2268]: E0709 10:12:16.761790 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:16.765386 kubelet[2268]: E0709 10:12:16.765362 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:16.766328 kubelet[2268]: E0709 10:12:16.766309 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.295012 kubelet[2268]: I0709 10:12:17.294980 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:17.767793 kubelet[2268]: E0709 10:12:17.767670 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:17.768075 kubelet[2268]: E0709 10:12:17.767855 2268 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:12:18.058708 kubelet[2268]: E0709 10:12:18.058582 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 10:12:18.141434 kubelet[2268]: I0709 10:12:18.140792 2268 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:12:18.150592 kubelet[2268]: I0709 10:12:18.150567 2268 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:18.161447 kubelet[2268]: E0709 10:12:18.161416 2268 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:18.161447 kubelet[2268]: I0709 10:12:18.161443 2268 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:18.163223 kubelet[2268]: E0709 10:12:18.163198 2268 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:18.163223 kubelet[2268]: I0709 10:12:18.163221 2268 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:18.165356 kubelet[2268]: E0709 10:12:18.165319 2268 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:18.636962 kubelet[2268]: I0709 10:12:18.636878 2268 apiserver.go:52] "Watching apiserver" Jul 9 10:12:18.650574 kubelet[2268]: I0709 10:12:18.650537 2268 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:12:19.059621 kubelet[2268]: I0709 10:12:19.059456 2268 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:20.008842 systemd[1]: Reload requested from client PID 2544 ('systemctl') (unit session-7.scope)... Jul 9 10:12:20.008858 systemd[1]: Reloading... Jul 9 10:12:20.073708 zram_generator::config[2587]: No configuration found. Jul 9 10:12:20.147372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:12:20.250162 systemd[1]: Reloading finished in 241 ms. Jul 9 10:12:20.276742 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:20.290560 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 10:12:20.290856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:20.290913 systemd[1]: kubelet.service: Consumed 1.552s CPU time, 129.5M memory peak. Jul 9 10:12:20.292816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:12:20.432783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:12:20.450626 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:12:20.484795 kubelet[2629]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:12:20.484795 kubelet[2629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 10:12:20.484795 kubelet[2629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:12:20.485170 kubelet[2629]: I0709 10:12:20.484841 2629 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 10:12:20.490161 kubelet[2629]: I0709 10:12:20.490124 2629 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 10:12:20.490161 kubelet[2629]: I0709 10:12:20.490153 2629 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 10:12:20.490403 kubelet[2629]: I0709 10:12:20.490374 2629 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 10:12:20.491608 kubelet[2629]: I0709 10:12:20.491568 2629 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 9 10:12:20.493746 kubelet[2629]: I0709 10:12:20.493706 2629 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:12:20.499177 kubelet[2629]: I0709 10:12:20.499136 2629 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 10:12:20.501859 kubelet[2629]: I0709 10:12:20.501804 2629 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 10:12:20.502051 kubelet[2629]: I0709 10:12:20.502016 2629 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 10:12:20.502196 kubelet[2629]: I0709 10:12:20.502040 2629 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 10:12:20.502276 kubelet[2629]: I0709 10:12:20.502198 2629 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 10:12:20.502276 kubelet[2629]: I0709 10:12:20.502208 2629 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 10:12:20.502276 kubelet[2629]: I0709 10:12:20.502257 2629 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:20.502386 kubelet[2629]: I0709 10:12:20.502376 2629 kubelet.go:446] "Attempting to sync node with API server" Jul 9 10:12:20.502417 kubelet[2629]: I0709 10:12:20.502391 2629 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 10:12:20.502417 kubelet[2629]: I0709 10:12:20.502410 2629 kubelet.go:352] "Adding apiserver pod source" Jul 9 10:12:20.502475 kubelet[2629]: I0709 10:12:20.502419 2629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 10:12:20.502871 kubelet[2629]: I0709 10:12:20.502845 2629 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 10:12:20.503406 kubelet[2629]: I0709 10:12:20.503261 2629 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode" Jul 9 10:12:20.503766 kubelet[2629]: I0709 10:12:20.503744 2629 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 10:12:20.503815 kubelet[2629]: I0709 10:12:20.503790 2629 server.go:1287] "Started kubelet" Jul 9 10:12:20.504892 kubelet[2629]: I0709 10:12:20.504844 2629 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 10:12:20.505693 kubelet[2629]: I0709 10:12:20.505281 2629 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 10:12:20.505693 kubelet[2629]: I0709 10:12:20.505578 2629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 10:12:20.506303 kubelet[2629]: I0709 10:12:20.506195 2629 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 10:12:20.507283 kubelet[2629]: I0709 10:12:20.507265 2629 server.go:479] "Adding debug handlers to kubelet server" Jul 9 10:12:20.509588 kubelet[2629]: I0709 10:12:20.509537 2629 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 10:12:20.509815 kubelet[2629]: I0709 10:12:20.509794 2629 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 10:12:20.510014 kubelet[2629]: E0709 10:12:20.509984 2629 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:12:20.510753 kubelet[2629]: I0709 10:12:20.510244 2629 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 10:12:20.510753 kubelet[2629]: I0709 10:12:20.510430 2629 reconciler.go:26] "Reconciler: start to sync state" Jul 9 10:12:20.519765 kubelet[2629]: E0709 10:12:20.517742 2629 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 10:12:20.519765 kubelet[2629]: I0709 10:12:20.518382 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 10:12:20.520763 kubelet[2629]: I0709 10:12:20.520005 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 10:12:20.520763 kubelet[2629]: I0709 10:12:20.520028 2629 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 10:12:20.520763 kubelet[2629]: I0709 10:12:20.520046 2629 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 10:12:20.520763 kubelet[2629]: I0709 10:12:20.520052 2629 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 10:12:20.520763 kubelet[2629]: E0709 10:12:20.520104 2629 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:12:20.523602 kubelet[2629]: I0709 10:12:20.523571 2629 factory.go:221] Registration of the containerd container factory successfully Jul 9 10:12:20.523602 kubelet[2629]: I0709 10:12:20.523601 2629 factory.go:221] Registration of the systemd container factory successfully Jul 9 10:12:20.524684 kubelet[2629]: I0709 10:12:20.523713 2629 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.560201 2629 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.560933 2629 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.560964 2629 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561131 2629 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561142 2629 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561159 2629 policy_none.go:49] "None policy: Start" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561168 2629 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561177 2629 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:12:20.561392 kubelet[2629]: I0709 10:12:20.561262 2629 state_mem.go:75] "Updated machine memory state" Jul 9 10:12:20.564652 kubelet[2629]: I0709 10:12:20.564630 2629 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 10:12:20.565039 kubelet[2629]: I0709 10:12:20.565021 2629 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:12:20.565152 kubelet[2629]: I0709 10:12:20.565121 2629 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:12:20.565494 kubelet[2629]: I0709 10:12:20.565474 2629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 10:12:20.567857 kubelet[2629]: E0709 10:12:20.567798 2629 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 10:12:20.621359 kubelet[2629]: I0709 10:12:20.621329 2629 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:20.621601 kubelet[2629]: I0709 10:12:20.621339 2629 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:20.621685 kubelet[2629]: I0709 10:12:20.621483 2629 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:20.627282 kubelet[2629]: E0709 10:12:20.627243 2629 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:20.666571 kubelet[2629]: I0709 10:12:20.666534 2629 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:12:20.671935 kubelet[2629]: I0709 10:12:20.671902 2629 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 10:12:20.672003 kubelet[2629]: I0709 10:12:20.671977 2629 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:12:20.811626 kubelet[2629]: I0709 10:12:20.811512 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:20.811626 kubelet[2629]: I0709 10:12:20.811556 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:20.811626 kubelet[2629]: I0709 10:12:20.811577 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:20.811626 kubelet[2629]: I0709 10:12:20.811595 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:20.811626 kubelet[2629]: I0709 10:12:20.811613 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:20.812095 kubelet[2629]: I0709 10:12:20.811628 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 
10:12:20.812095 kubelet[2629]: I0709 10:12:20.811642 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/220a83b0fb59a90889496b949033f316-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"220a83b0fb59a90889496b949033f316\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:20.812095 kubelet[2629]: I0709 10:12:20.811657 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:20.812095 kubelet[2629]: I0709 10:12:20.811690 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:12:21.010358 sudo[2663]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 10:12:21.010615 sudo[2663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 10:12:21.324316 sudo[2663]: pam_unix(sudo:session): session closed for user root Jul 9 10:12:21.503015 kubelet[2629]: I0709 10:12:21.502974 2629 apiserver.go:52] "Watching apiserver" Jul 9 10:12:21.510862 kubelet[2629]: I0709 10:12:21.510826 2629 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:12:21.546918 kubelet[2629]: I0709 10:12:21.546888 2629 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:21.547160 kubelet[2629]: I0709 10:12:21.547134 2629 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:21.552333 kubelet[2629]: E0709 10:12:21.552270 2629 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 10:12:21.553603 kubelet[2629]: E0709 10:12:21.553560 2629 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 9 10:12:21.570608 kubelet[2629]: I0709 10:12:21.570254 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.570240789 podStartE2EDuration="1.570240789s" podCreationTimestamp="2025-07-09 10:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:21.563202905 +0000 UTC m=+1.109351566" watchObservedRunningTime="2025-07-09 10:12:21.570240789 +0000 UTC m=+1.116389450" Jul 9 10:12:21.577931 kubelet[2629]: I0709 10:12:21.577822 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.577809064 podStartE2EDuration="1.577809064s" podCreationTimestamp="2025-07-09 10:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 
10:12:21.570729472 +0000 UTC m=+1.116878133" watchObservedRunningTime="2025-07-09 10:12:21.577809064 +0000 UTC m=+1.123957765" Jul 9 10:12:21.589314 kubelet[2629]: I0709 10:12:21.589255 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.589194898 podStartE2EDuration="2.589194898s" podCreationTimestamp="2025-07-09 10:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:21.578241469 +0000 UTC m=+1.124390130" watchObservedRunningTime="2025-07-09 10:12:21.589194898 +0000 UTC m=+1.135343559" Jul 9 10:12:23.154881 sudo[1715]: pam_unix(sudo:session): session closed for user root Jul 9 10:12:23.155957 sshd[1714]: Connection closed by 10.0.0.1 port 37260 Jul 9 10:12:23.156400 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jul 9 10:12:23.159991 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:37260.service: Deactivated successfully. Jul 9 10:12:23.162451 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 10:12:23.162720 systemd[1]: session-7.scope: Consumed 7.863s CPU time, 259.2M memory peak. Jul 9 10:12:23.163884 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jul 9 10:12:23.164908 systemd-logind[1483]: Removed session 7. Jul 9 10:12:27.112778 kubelet[2629]: I0709 10:12:27.112739 2629 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 10:12:27.113186 containerd[1498]: time="2025-07-09T10:12:27.113117327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 10:12:27.113375 kubelet[2629]: I0709 10:12:27.113282 2629 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 10:12:28.031272 systemd[1]: Created slice kubepods-besteffort-poda9018838_c668_40b8_b487_a5a735acc3a8.slice - libcontainer container kubepods-besteffort-poda9018838_c668_40b8_b487_a5a735acc3a8.slice. 
Jul 9 10:12:28.058174 kubelet[2629]: I0709 10:12:28.058124 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-bpf-maps\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058187 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-net\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058208 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hubble-tls\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058226 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9ldf\" (UniqueName: \"kubernetes.io/projected/a9018838-c668-40b8-b487-a5a735acc3a8-kube-api-access-l9ldf\") pod \"kube-proxy-pd2t7\" (UID: \"a9018838-c668-40b8-b487-a5a735acc3a8\") " pod="kube-system/kube-proxy-pd2t7" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058256 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hostproc\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058270 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cni-path\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058301 kubelet[2629]: I0709 10:12:28.058286 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-kernel\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058442 kubelet[2629]: I0709 10:12:28.058325 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9018838-c668-40b8-b487-a5a735acc3a8-xtables-lock\") pod \"kube-proxy-pd2t7\" (UID: \"a9018838-c668-40b8-b487-a5a735acc3a8\") " pod="kube-system/kube-proxy-pd2t7" Jul 9 10:12:28.058442 kubelet[2629]: I0709 10:12:28.058365 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-clustermesh-secrets\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058442 kubelet[2629]: I0709 10:12:28.058382 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-config-path\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058442 kubelet[2629]: I0709 10:12:28.058410 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-lib-modules\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058442 kubelet[2629]: I0709 10:12:28.058426 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmm4\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-kube-api-access-fpmm4\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058446 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-cgroup\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058462 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-etc-cni-netd\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058478 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-xtables-lock\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058496 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9018838-c668-40b8-b487-a5a735acc3a8-lib-modules\") pod \"kube-proxy-pd2t7\" (UID: \"a9018838-c668-40b8-b487-a5a735acc3a8\") " pod="kube-system/kube-proxy-pd2t7" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058512 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9018838-c668-40b8-b487-a5a735acc3a8-kube-proxy\") pod \"kube-proxy-pd2t7\" (UID: \"a9018838-c668-40b8-b487-a5a735acc3a8\") " pod="kube-system/kube-proxy-pd2t7" Jul 9 10:12:28.058546 kubelet[2629]: I0709 10:12:28.058527 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-run\") pod \"cilium-p8vzj\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " pod="kube-system/cilium-p8vzj" Jul 9 10:12:28.060269 systemd[1]: Created slice kubepods-burstable-pod83e22362_d70b_4c48_bb8a_b6b0210d1ef7.slice - libcontainer container kubepods-burstable-pod83e22362_d70b_4c48_bb8a_b6b0210d1ef7.slice. 
Jul 9 10:12:28.126207 systemd[1]: Created slice kubepods-besteffort-pod8f9a4a42_1dcf_4eac_a285_40b91c0f6177.slice - libcontainer container kubepods-besteffort-pod8f9a4a42_1dcf_4eac_a285_40b91c0f6177.slice. Jul 9 10:12:28.159861 kubelet[2629]: I0709 10:12:28.159814 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8ff6p\" (UID: \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\") " pod="kube-system/cilium-operator-6c4d7847fc-8ff6p" Jul 9 10:12:28.160199 kubelet[2629]: I0709 10:12:28.160010 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6t78\" (UniqueName: \"kubernetes.io/projected/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-kube-api-access-j6t78\") pod \"cilium-operator-6c4d7847fc-8ff6p\" (UID: \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\") " pod="kube-system/cilium-operator-6c4d7847fc-8ff6p" Jul 9 10:12:28.355260 containerd[1498]: time="2025-07-09T10:12:28.355149499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd2t7,Uid:a9018838-c668-40b8-b487-a5a735acc3a8,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:28.364031 containerd[1498]: time="2025-07-09T10:12:28.363991623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8vzj,Uid:83e22362-d70b-4c48-bb8a-b6b0210d1ef7,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:28.378126 containerd[1498]: time="2025-07-09T10:12:28.378052380Z" level=info msg="connecting to shim 8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3" address="unix:///run/containerd/s/ce4875bf3fe22b55379a37d3d5b8593af891de07f6e543804299adfc7be5f7d0" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:28.390358 containerd[1498]: time="2025-07-09T10:12:28.390312297Z" level=info msg="connecting to shim f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:28.400883 systemd[1]: Started cri-containerd-8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3.scope - libcontainer container 8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3. Jul 9 10:12:28.410467 systemd[1]: Started cri-containerd-f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e.scope - libcontainer container f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e. 
Jul 9 10:12:28.432283 containerd[1498]: time="2025-07-09T10:12:28.432244491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8ff6p,Uid:8f9a4a42-1dcf-4eac-a285-40b91c0f6177,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:28.436858 containerd[1498]: time="2025-07-09T10:12:28.436817504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd2t7,Uid:a9018838-c668-40b8-b487-a5a735acc3a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3\"" Jul 9 10:12:28.437398 containerd[1498]: time="2025-07-09T10:12:28.437371802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8vzj,Uid:83e22362-d70b-4c48-bb8a-b6b0210d1ef7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\"" Jul 9 10:12:28.440869 containerd[1498]: time="2025-07-09T10:12:28.440831736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 10:12:28.442576 containerd[1498]: time="2025-07-09T10:12:28.441689055Z" level=info msg="CreateContainer within sandbox \"8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 10:12:28.456622 containerd[1498]: time="2025-07-09T10:12:28.456570795Z" level=info msg="Container a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:28.461155 containerd[1498]: time="2025-07-09T10:12:28.461112353Z" level=info msg="connecting to shim db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef" address="unix:///run/containerd/s/c8d2bf795304001d7af7b4e04e8de88c7d964bc08262f616b5ad6777ffc43e1a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:28.463428 containerd[1498]: time="2025-07-09T10:12:28.463388334Z" level=info msg="CreateContainer within sandbox \"8b6bc9e78b5654d88c55644f0a3d5f51a75022b4d401f165e9773826c41cf6c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c\"" Jul 9 10:12:28.464269 containerd[1498]: time="2025-07-09T10:12:28.464224204Z" level=info msg="StartContainer for \"a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c\"" Jul 9 10:12:28.469431 containerd[1498]: time="2025-07-09T10:12:28.467972512Z" level=info msg="connecting to shim a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c" address="unix:///run/containerd/s/ce4875bf3fe22b55379a37d3d5b8593af891de07f6e543804299adfc7be5f7d0" protocol=ttrpc version=3 Jul 9 10:12:28.495883 systemd[1]: Started cri-containerd-a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c.scope - libcontainer container a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c. Jul 9 10:12:28.499023 systemd[1]: Started cri-containerd-db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef.scope - libcontainer container db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef. 
Jul 9 10:12:28.540573 containerd[1498]: time="2025-07-09T10:12:28.540528067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8ff6p,Uid:8f9a4a42-1dcf-4eac-a285-40b91c0f6177,Namespace:kube-system,Attempt:0,} returns sandbox id \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\"" Jul 9 10:12:28.541997 containerd[1498]: time="2025-07-09T10:12:28.541918915Z" level=info msg="StartContainer for \"a8638aa50fb47acaa44c3120dd7453a27a4830397361890d7357573d5531328c\" returns successfully" Jul 9 10:12:29.269865 kubelet[2629]: I0709 10:12:29.269709 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pd2t7" podStartSLOduration=1.269691056 podStartE2EDuration="1.269691056s" podCreationTimestamp="2025-07-09 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:28.569191353 +0000 UTC m=+8.115340054" watchObservedRunningTime="2025-07-09 10:12:29.269691056 +0000 UTC m=+8.815839717" Jul 9 10:12:33.351586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666487790.mount: Deactivated successfully. Jul 9 10:12:34.698290 containerd[1498]: time="2025-07-09T10:12:34.698232744Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:34.698899 containerd[1498]: time="2025-07-09T10:12:34.698855434Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 9 10:12:34.699704 containerd[1498]: time="2025-07-09T10:12:34.699634777Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:34.701232 containerd[1498]: time="2025-07-09T10:12:34.701192303Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.260315308s" Jul 9 10:12:34.701332 containerd[1498]: time="2025-07-09T10:12:34.701238638Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 9 10:12:34.711459 containerd[1498]: time="2025-07-09T10:12:34.711406230Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 10:12:34.715192 containerd[1498]: time="2025-07-09T10:12:34.715142571Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 10:12:34.724643 containerd[1498]: time="2025-07-09T10:12:34.724594161Z" level=info msg="Container 550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:34.731160 containerd[1498]: time="2025-07-09T10:12:34.731099076Z" level=info 
msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\"" Jul 9 10:12:34.733788 containerd[1498]: time="2025-07-09T10:12:34.733740848Z" level=info msg="StartContainer for \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\"" Jul 9 10:12:34.734882 containerd[1498]: time="2025-07-09T10:12:34.734853663Z" level=info msg="connecting to shim 550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" protocol=ttrpc version=3 Jul 9 10:12:34.795904 systemd[1]: Started cri-containerd-550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087.scope - libcontainer container 550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087. Jul 9 10:12:34.851403 containerd[1498]: time="2025-07-09T10:12:34.851353342Z" level=info msg="StartContainer for \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" returns successfully" Jul 9 10:12:34.906658 systemd[1]: cri-containerd-550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087.scope: Deactivated successfully. Jul 9 10:12:34.935011 containerd[1498]: time="2025-07-09T10:12:34.934951317Z" level=info msg="received exit event container_id:\"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" id:\"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" pid:3052 exited_at:{seconds:1752055954 nanos:926895438}" Jul 9 10:12:34.937427 containerd[1498]: time="2025-07-09T10:12:34.937381977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" id:\"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" pid:3052 exited_at:{seconds:1752055954 nanos:926895438}" Jul 9 10:12:34.971657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087-rootfs.mount: Deactivated successfully. 
Jul 9 10:12:35.589716 containerd[1498]: time="2025-07-09T10:12:35.589504284Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 10:12:35.601704 containerd[1498]: time="2025-07-09T10:12:35.601159299Z" level=info msg="Container c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:35.609248 containerd[1498]: time="2025-07-09T10:12:35.609056710Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\"" Jul 9 10:12:35.609920 containerd[1498]: time="2025-07-09T10:12:35.609884135Z" level=info msg="StartContainer for \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\"" Jul 9 10:12:35.611024 containerd[1498]: time="2025-07-09T10:12:35.610959479Z" level=info msg="connecting to shim c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" protocol=ttrpc version=3 Jul 9 10:12:35.631867 systemd[1]: Started cri-containerd-c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a.scope - libcontainer container c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a. Jul 9 10:12:35.655180 containerd[1498]: time="2025-07-09T10:12:35.655121032Z" level=info msg="StartContainer for \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" returns successfully" Jul 9 10:12:35.676555 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 10:12:35.676804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:12:35.677010 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:12:35.678978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:12:35.680358 systemd[1]: cri-containerd-c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a.scope: Deactivated successfully. Jul 9 10:12:35.680528 containerd[1498]: time="2025-07-09T10:12:35.680495524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" id:\"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" pid:3097 exited_at:{seconds:1752055955 nanos:680221556}" Jul 9 10:12:35.680584 containerd[1498]: time="2025-07-09T10:12:35.680569148Z" level=info msg="received exit event container_id:\"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" id:\"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" pid:3097 exited_at:{seconds:1752055955 nanos:680221556}" Jul 9 10:12:35.712755 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:12:35.749477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130817775.mount: Deactivated successfully. Jul 9 10:12:36.437552 update_engine[1485]: I20250709 10:12:36.437481 1485 update_attempter.cc:509] Updating boot flags... 
Jul 9 10:12:36.600794 containerd[1498]: time="2025-07-09T10:12:36.600754183Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 10:12:36.629873 containerd[1498]: time="2025-07-09T10:12:36.629754134Z" level=info msg="Container 41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:36.634646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681693795.mount: Deactivated successfully. Jul 9 10:12:36.639021 containerd[1498]: time="2025-07-09T10:12:36.638972821Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\"" Jul 9 10:12:36.640374 containerd[1498]: time="2025-07-09T10:12:36.640326153Z" level=info msg="StartContainer for \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\"" Jul 9 10:12:36.641759 containerd[1498]: time="2025-07-09T10:12:36.641727820Z" level=info msg="connecting to shim 41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" protocol=ttrpc version=3 Jul 9 10:12:36.665854 systemd[1]: Started cri-containerd-41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20.scope - libcontainer container 41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20. Jul 9 10:12:36.696661 containerd[1498]: time="2025-07-09T10:12:36.696567559Z" level=info msg="StartContainer for \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" returns successfully" Jul 9 10:12:36.721580 systemd[1]: cri-containerd-41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20.scope: Deactivated successfully. Jul 9 10:12:36.723818 containerd[1498]: time="2025-07-09T10:12:36.723778205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" id:\"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" pid:3170 exited_at:{seconds:1752055956 nanos:723470831}" Jul 9 10:12:36.731385 containerd[1498]: time="2025-07-09T10:12:36.731337827Z" level=info msg="received exit event container_id:\"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" id:\"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" pid:3170 exited_at:{seconds:1752055956 nanos:723470831}" Jul 9 10:12:36.752631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20-rootfs.mount: Deactivated successfully. 
Jul 9 10:12:36.980795 containerd[1498]: time="2025-07-09T10:12:36.980688317Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:36.981458 containerd[1498]: time="2025-07-09T10:12:36.981390330Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 9 10:12:36.982642 containerd[1498]: time="2025-07-09T10:12:36.982533078Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:12:36.983708 containerd[1498]: time="2025-07-09T10:12:36.983659461Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.272211258s" Jul 9 10:12:36.983906 containerd[1498]: time="2025-07-09T10:12:36.983805066Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 9 10:12:36.987033 containerd[1498]: time="2025-07-09T10:12:36.987004000Z" level=info msg="CreateContainer within sandbox \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 10:12:36.993715 containerd[1498]: time="2025-07-09T10:12:36.993091213Z" level=info msg="Container b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:36.996331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995013094.mount: Deactivated successfully. Jul 9 10:12:36.999589 containerd[1498]: time="2025-07-09T10:12:36.999546259Z" level=info msg="CreateContainer within sandbox \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\"" Jul 9 10:12:37.000283 containerd[1498]: time="2025-07-09T10:12:37.000258676Z" level=info msg="StartContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\"" Jul 9 10:12:37.001576 containerd[1498]: time="2025-07-09T10:12:37.001505483Z" level=info msg="connecting to shim b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc" address="unix:///run/containerd/s/c8d2bf795304001d7af7b4e04e8de88c7d964bc08262f616b5ad6777ffc43e1a" protocol=ttrpc version=3 Jul 9 10:12:37.032847 systemd[1]: Started cri-containerd-b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc.scope - libcontainer container b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc. 
Jul 9 10:12:37.057352 containerd[1498]: time="2025-07-09T10:12:37.057314242Z" level=info msg="StartContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" returns successfully" Jul 9 10:12:37.604372 containerd[1498]: time="2025-07-09T10:12:37.604314582Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 10:12:37.608936 kubelet[2629]: I0709 10:12:37.608874 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8ff6p" podStartSLOduration=1.166463476 podStartE2EDuration="9.608796559s" podCreationTimestamp="2025-07-09 10:12:28 +0000 UTC" firstStartedPulling="2025-07-09 10:12:28.54218616 +0000 UTC m=+8.088334821" lastFinishedPulling="2025-07-09 10:12:36.984519243 +0000 UTC m=+16.530667904" observedRunningTime="2025-07-09 10:12:37.60838332 +0000 UTC m=+17.154532021" watchObservedRunningTime="2025-07-09 10:12:37.608796559 +0000 UTC m=+17.154945180" Jul 9 10:12:37.638207 containerd[1498]: time="2025-07-09T10:12:37.638157020Z" level=info msg="Container b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:37.647382 containerd[1498]: time="2025-07-09T10:12:37.647320034Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\"" Jul 9 10:12:37.650450 containerd[1498]: time="2025-07-09T10:12:37.649231347Z" level=info msg="StartContainer for \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\"" Jul 9 10:12:37.650450 containerd[1498]: time="2025-07-09T10:12:37.650087875Z" level=info msg="connecting to shim b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" protocol=ttrpc version=3 Jul 9 10:12:37.696258 systemd[1]: Started cri-containerd-b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8.scope - libcontainer container b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8. Jul 9 10:12:37.730228 systemd[1]: cri-containerd-b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8.scope: Deactivated successfully. 
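For reference, the startup-latency figures kubelet reports for cilium-operator-6c4d7847fc-8ff6p above are internally consistent: podStartE2EDuration 9.608796559s is watchObservedRunningTime (10:12:37.608796559) minus podCreationTimestamp (10:12:28), and podStartSLOduration 1.166463476s is that end-to-end figure minus the image-pull window, since lastFinishedPulling − firstStartedPulling = 10:12:36.984519243 − 10:12:28.542186160 = 8.442333083s and 9.608796559 − 8.442333083 = 1.166463476. In other words, the SLO clock appears to exclude time spent pulling the operator image.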
Jul 9 10:12:37.732088 containerd[1498]: time="2025-07-09T10:12:37.732044045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" id:\"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" pid:3253 exited_at:{seconds:1752055957 nanos:731609319}" Jul 9 10:12:37.758300 containerd[1498]: time="2025-07-09T10:12:37.758200738Z" level=info msg="received exit event container_id:\"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" id:\"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" pid:3253 exited_at:{seconds:1752055957 nanos:731609319}" Jul 9 10:12:37.767582 containerd[1498]: time="2025-07-09T10:12:37.767524798Z" level=info msg="StartContainer for \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" returns successfully" Jul 9 10:12:37.779817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8-rootfs.mount: Deactivated successfully. Jul 9 10:12:38.611493 containerd[1498]: time="2025-07-09T10:12:38.611436669Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 10:12:38.626344 containerd[1498]: time="2025-07-09T10:12:38.626301365Z" level=info msg="Container cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:38.633349 containerd[1498]: time="2025-07-09T10:12:38.633309655Z" level=info msg="CreateContainer within sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\"" Jul 9 10:12:38.643414 containerd[1498]: time="2025-07-09T10:12:38.643208303Z" level=info msg="StartContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\"" Jul 9 10:12:38.645310 containerd[1498]: time="2025-07-09T10:12:38.645253306Z" level=info msg="connecting to shim cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd" address="unix:///run/containerd/s/8bb884bc310f0866e0d3549f30d61e580d15d6af48cc4db394f9c1aa7f4fe0df" protocol=ttrpc version=3 Jul 9 10:12:38.669871 systemd[1]: Started cri-containerd-cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd.scope - libcontainer container cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd. Jul 9 10:12:38.701515 containerd[1498]: time="2025-07-09T10:12:38.701456431Z" level=info msg="StartContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" returns successfully" Jul 9 10:12:38.799699 containerd[1498]: time="2025-07-09T10:12:38.798944209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" id:\"2e1c100eb0ee1c68e4c8feb4d386ef161ea3758de5bda868113fff7c00fc056e\" pid:3320 exited_at:{seconds:1752055958 nanos:798595833}" Jul 9 10:12:38.848873 kubelet[2629]: I0709 10:12:38.848798 2629 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 10:12:38.888961 systemd[1]: Created slice kubepods-burstable-podc34c3b07_8692_4945_b039_7587a6e410e7.slice - libcontainer container kubepods-burstable-podc34c3b07_8692_4945_b039_7587a6e410e7.slice. 
Jul 9 10:12:38.897511 systemd[1]: Created slice kubepods-burstable-pod6645c86f_8ab0_409c_a0c9_5d2c113b7c67.slice - libcontainer container kubepods-burstable-pod6645c86f_8ab0_409c_a0c9_5d2c113b7c67.slice. Jul 9 10:12:38.947939 kubelet[2629]: I0709 10:12:38.947882 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6645c86f-8ab0-409c-a0c9-5d2c113b7c67-config-volume\") pod \"coredns-668d6bf9bc-qqrwv\" (UID: \"6645c86f-8ab0-409c-a0c9-5d2c113b7c67\") " pod="kube-system/coredns-668d6bf9bc-qqrwv" Jul 9 10:12:38.947939 kubelet[2629]: I0709 10:12:38.947932 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34c3b07-8692-4945-b039-7587a6e410e7-config-volume\") pod \"coredns-668d6bf9bc-77s5r\" (UID: \"c34c3b07-8692-4945-b039-7587a6e410e7\") " pod="kube-system/coredns-668d6bf9bc-77s5r" Jul 9 10:12:38.948108 kubelet[2629]: I0709 10:12:38.947953 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457v5\" (UniqueName: \"kubernetes.io/projected/6645c86f-8ab0-409c-a0c9-5d2c113b7c67-kube-api-access-457v5\") pod \"coredns-668d6bf9bc-qqrwv\" (UID: \"6645c86f-8ab0-409c-a0c9-5d2c113b7c67\") " pod="kube-system/coredns-668d6bf9bc-qqrwv" Jul 9 10:12:38.948108 kubelet[2629]: I0709 10:12:38.947973 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twj7h\" (UniqueName: \"kubernetes.io/projected/c34c3b07-8692-4945-b039-7587a6e410e7-kube-api-access-twj7h\") pod \"coredns-668d6bf9bc-77s5r\" (UID: \"c34c3b07-8692-4945-b039-7587a6e410e7\") " pod="kube-system/coredns-668d6bf9bc-77s5r" Jul 9 10:12:39.194832 containerd[1498]: time="2025-07-09T10:12:39.194527246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77s5r,Uid:c34c3b07-8692-4945-b039-7587a6e410e7,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:39.202244 containerd[1498]: time="2025-07-09T10:12:39.202204780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqrwv,Uid:6645c86f-8ab0-409c-a0c9-5d2c113b7c67,Namespace:kube-system,Attempt:0,}" Jul 9 10:12:40.912483 systemd-networkd[1436]: cilium_host: Link UP Jul 9 10:12:40.912609 systemd-networkd[1436]: cilium_net: Link UP Jul 9 10:12:40.912756 systemd-networkd[1436]: cilium_net: Gained carrier Jul 9 10:12:40.912876 systemd-networkd[1436]: cilium_host: Gained carrier Jul 9 10:12:41.004426 systemd-networkd[1436]: cilium_vxlan: Link UP Jul 9 10:12:41.004610 systemd-networkd[1436]: cilium_vxlan: Gained carrier Jul 9 10:12:41.291714 kernel: NET: Registered PF_ALG protocol family Jul 9 10:12:41.292796 systemd-networkd[1436]: cilium_net: Gained IPv6LL Jul 9 10:12:41.627856 systemd-networkd[1436]: cilium_host: Gained IPv6LL Jul 9 10:12:41.914280 systemd-networkd[1436]: lxc_health: Link UP Jul 9 10:12:41.919553 systemd-networkd[1436]: lxc_health: Gained carrier Jul 9 10:12:42.304667 systemd-networkd[1436]: lxcd2b316639eea: Link UP Jul 9 10:12:42.314602 kernel: eth0: renamed from tmpc9845 Jul 9 10:12:42.315764 systemd-networkd[1436]: lxcd2b316639eea: Gained carrier Jul 9 10:12:42.317194 systemd-networkd[1436]: lxc7bf5d301202c: Link UP Jul 9 10:12:42.324125 kernel: eth0: renamed from tmp2f255 Jul 9 10:12:42.324633 systemd-networkd[1436]: lxc7bf5d301202c: Gained carrier Jul 9 10:12:42.390836 kubelet[2629]: I0709 10:12:42.390748 2629 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p8vzj" podStartSLOduration=8.11906112 podStartE2EDuration="14.390732626s" podCreationTimestamp="2025-07-09 10:12:28 +0000 UTC" firstStartedPulling="2025-07-09 10:12:28.439580592 +0000 UTC m=+7.985729213" lastFinishedPulling="2025-07-09 10:12:34.711252058 +0000 UTC m=+14.257400719" observedRunningTime="2025-07-09 10:12:39.628272321 +0000 UTC m=+19.174420982" watchObservedRunningTime="2025-07-09 10:12:42.390732626 +0000 UTC m=+21.936881247" Jul 9 10:12:42.459854 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Jul 9 10:12:43.356159 systemd-networkd[1436]: lxc_health: Gained IPv6LL Jul 9 10:12:43.803827 systemd-networkd[1436]: lxc7bf5d301202c: Gained IPv6LL Jul 9 10:12:44.251920 systemd-networkd[1436]: lxcd2b316639eea: Gained IPv6LL Jul 9 10:12:45.889330 containerd[1498]: time="2025-07-09T10:12:45.889267538Z" level=info msg="connecting to shim 2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3" address="unix:///run/containerd/s/2d43347c3af9a1b6f633ce7549a78e7f2036cd9235192eeddf8e60b408546f09" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:45.891232 containerd[1498]: time="2025-07-09T10:12:45.891177198Z" level=info msg="connecting to shim c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6" address="unix:///run/containerd/s/ed6cce27988874dfe72ca2afc0594b99c2a99b1dfd8405a96855e23a7b52a0d3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:12:45.913905 systemd[1]: Started cri-containerd-c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6.scope - libcontainer container c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6. Jul 9 10:12:45.931880 systemd[1]: Started cri-containerd-2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3.scope - libcontainer container 2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3. 
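The systemd-networkd messages above record the Cilium datapath interfaces coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health, and one lxc* veth per pod endpoint (the "eth0: renamed from tmp..." kernel lines are the container-side peers being moved into pod network namespaces). As a rough sketch of the underlying netlink operations, and not Cilium's actual datapath code, creating and raising such a veth pair with the github.com/vishvananda/netlink package looks roughly like this (requires root; interface names copied from the log for illustration):

```go
// Sketch: create a veth pair and bring both ends up; systemd-networkd then
// logs "Link UP" / "Gained carrier" for each interface, as seen above.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	attrs := netlink.NewLinkAttrs()
	attrs.Name = "cilium_host"

	veth := &netlink.Veth{LinkAttrs: attrs, PeerName: "cilium_net"}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatalf("creating veth pair: %v", err)
	}

	peer, err := netlink.LinkByName("cilium_net")
	if err != nil {
		log.Fatal(err)
	}

	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(peer); err != nil {
		log.Fatal(err)
	}
}
```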
Jul 9 10:12:45.936499 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:45.944707 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:12:45.966566 containerd[1498]: time="2025-07-09T10:12:45.966080103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqrwv,Uid:6645c86f-8ab0-409c-a0c9-5d2c113b7c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6\"" Jul 9 10:12:45.975446 containerd[1498]: time="2025-07-09T10:12:45.971365514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77s5r,Uid:c34c3b07-8692-4945-b039-7587a6e410e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3\"" Jul 9 10:12:45.979397 containerd[1498]: time="2025-07-09T10:12:45.979357065Z" level=info msg="CreateContainer within sandbox \"2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:12:45.979658 containerd[1498]: time="2025-07-09T10:12:45.979627838Z" level=info msg="CreateContainer within sandbox \"c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:12:45.994984 containerd[1498]: time="2025-07-09T10:12:45.994935885Z" level=info msg="Container 72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:45.996100 containerd[1498]: time="2025-07-09T10:12:45.996071631Z" level=info msg="Container 3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:12:46.001856 containerd[1498]: time="2025-07-09T10:12:46.001807931Z" level=info msg="CreateContainer within sandbox \"2f255d0f28c4b4a8525bfbe9a5088cbed1ee9378063c36aa4e46aac7178eeff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5\"" Jul 9 10:12:46.003082 containerd[1498]: time="2025-07-09T10:12:46.002963871Z" level=info msg="StartContainer for \"72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5\"" Jul 9 10:12:46.003510 containerd[1498]: time="2025-07-09T10:12:46.003473208Z" level=info msg="CreateContainer within sandbox \"c984517a3e4134711ad0c6b9ffb17ef962234e2da21ecc58ea516f65e417bde6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312\"" Jul 9 10:12:46.004057 containerd[1498]: time="2025-07-09T10:12:46.004021553Z" level=info msg="StartContainer for \"3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312\"" Jul 9 10:12:46.004572 containerd[1498]: time="2025-07-09T10:12:46.004533691Z" level=info msg="connecting to shim 72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5" address="unix:///run/containerd/s/2d43347c3af9a1b6f633ce7549a78e7f2036cd9235192eeddf8e60b408546f09" protocol=ttrpc version=3 Jul 9 10:12:46.005030 containerd[1498]: time="2025-07-09T10:12:46.004995099Z" level=info msg="connecting to shim 3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312" address="unix:///run/containerd/s/ed6cce27988874dfe72ca2afc0594b99c2a99b1dfd8405a96855e23a7b52a0d3" protocol=ttrpc version=3 Jul 9 10:12:46.026893 systemd[1]: Started 
cri-containerd-3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312.scope - libcontainer container 3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312. Jul 9 10:12:46.029039 systemd[1]: Started cri-containerd-72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5.scope - libcontainer container 72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5. Jul 9 10:12:46.062163 containerd[1498]: time="2025-07-09T10:12:46.062119627Z" level=info msg="StartContainer for \"3992c7176e3cd4f8490b7f3790450f2b64cd2f6d0ce8e29f002fc60d863e4312\" returns successfully" Jul 9 10:12:46.067460 containerd[1498]: time="2025-07-09T10:12:46.067411996Z" level=info msg="StartContainer for \"72543414e5f7f380437df03d69d3a44339d84d565f9fde122c2310607ac47ca5\" returns successfully" Jul 9 10:12:46.640083 kubelet[2629]: I0709 10:12:46.639950 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-77s5r" podStartSLOduration=18.639930407 podStartE2EDuration="18.639930407s" podCreationTimestamp="2025-07-09 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:46.63989012 +0000 UTC m=+26.186038781" watchObservedRunningTime="2025-07-09 10:12:46.639930407 +0000 UTC m=+26.186079108" Jul 9 10:12:46.655015 kubelet[2629]: I0709 10:12:46.654948 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qqrwv" podStartSLOduration=18.654930747 podStartE2EDuration="18.654930747s" podCreationTimestamp="2025-07-09 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:12:46.654667857 +0000 UTC m=+26.200816518" watchObservedRunningTime="2025-07-09 10:12:46.654930747 +0000 UTC m=+26.201079408" Jul 9 10:12:46.873499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2748591684.mount: Deactivated successfully. Jul 9 10:12:49.069317 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:51886.service - OpenSSH per-connection server daemon (10.0.0.1:51886). Jul 9 10:12:49.129070 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 51886 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:12:49.130554 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:12:49.135771 systemd-logind[1483]: New session 8 of user core. Jul 9 10:12:49.150894 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 10:12:49.313188 sshd[3968]: Connection closed by 10.0.0.1 port 51886 Jul 9 10:12:49.313854 sshd-session[3965]: pam_unix(sshd:session): session closed for user core Jul 9 10:12:49.319082 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:51886.service: Deactivated successfully. Jul 9 10:12:49.322085 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 10:12:49.323216 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Jul 9 10:12:49.325670 systemd-logind[1483]: Removed session 8. Jul 9 10:12:50.700053 kubelet[2629]: I0709 10:12:50.699987 2629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 10:12:54.335959 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948). 
Jul 9 10:12:54.407354 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:12:54.411287 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:12:54.416963 systemd-logind[1483]: New session 9 of user core. Jul 9 10:12:54.429875 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 10:12:54.561079 sshd[3987]: Connection closed by 10.0.0.1 port 42948 Jul 9 10:12:54.559867 sshd-session[3984]: pam_unix(sshd:session): session closed for user core Jul 9 10:12:54.565171 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:42948.service: Deactivated successfully. Jul 9 10:12:54.567158 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 10:12:54.572076 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Jul 9 10:12:54.573768 systemd-logind[1483]: Removed session 9. Jul 9 10:12:59.576236 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:42950.service - OpenSSH per-connection server daemon (10.0.0.1:42950). Jul 9 10:12:59.635635 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 42950 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:12:59.636939 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:12:59.641431 systemd-logind[1483]: New session 10 of user core. Jul 9 10:12:59.654889 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 10:12:59.770073 sshd[4007]: Connection closed by 10.0.0.1 port 42950 Jul 9 10:12:59.769374 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Jul 9 10:12:59.783212 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:42950.service: Deactivated successfully. Jul 9 10:12:59.785753 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 10:12:59.786835 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Jul 9 10:12:59.789545 systemd-logind[1483]: Removed session 10. Jul 9 10:12:59.791405 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:42958.service - OpenSSH per-connection server daemon (10.0.0.1:42958). Jul 9 10:12:59.846517 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 42958 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:12:59.848257 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:12:59.853405 systemd-logind[1483]: New session 11 of user core. Jul 9 10:12:59.863857 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 10:13:00.021902 sshd[4024]: Connection closed by 10.0.0.1 port 42958 Jul 9 10:13:00.021243 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:00.033695 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:42958.service: Deactivated successfully. Jul 9 10:13:00.035968 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 10:13:00.037212 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Jul 9 10:13:00.042022 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:42968.service - OpenSSH per-connection server daemon (10.0.0.1:42968). Jul 9 10:13:00.043834 systemd-logind[1483]: Removed session 11. 
Jul 9 10:13:00.103462 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 42968 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:00.104977 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:00.109569 systemd-logind[1483]: New session 12 of user core. Jul 9 10:13:00.118834 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 10:13:00.230347 sshd[4039]: Connection closed by 10.0.0.1 port 42968 Jul 9 10:13:00.230712 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:00.234425 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:42968.service: Deactivated successfully. Jul 9 10:13:00.236818 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 10:13:00.237988 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Jul 9 10:13:00.239611 systemd-logind[1483]: Removed session 12. Jul 9 10:13:05.251178 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:37118.service - OpenSSH per-connection server daemon (10.0.0.1:37118). Jul 9 10:13:05.302353 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 37118 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:05.308376 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:05.314707 systemd-logind[1483]: New session 13 of user core. Jul 9 10:13:05.327918 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 10:13:05.463239 sshd[4055]: Connection closed by 10.0.0.1 port 37118 Jul 9 10:13:05.463223 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:05.466749 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:37118.service: Deactivated successfully. Jul 9 10:13:05.468537 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 10:13:05.470877 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Jul 9 10:13:05.472110 systemd-logind[1483]: Removed session 13. Jul 9 10:13:10.481639 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124). Jul 9 10:13:10.541232 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:10.542941 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:10.550251 systemd-logind[1483]: New session 14 of user core. Jul 9 10:13:10.561706 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 10:13:10.688968 sshd[4071]: Connection closed by 10.0.0.1 port 37124 Jul 9 10:13:10.689975 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:10.702921 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:37124.service: Deactivated successfully. Jul 9 10:13:10.705311 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 10:13:10.706133 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Jul 9 10:13:10.709117 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:37136.service - OpenSSH per-connection server daemon (10.0.0.1:37136). Jul 9 10:13:10.710788 systemd-logind[1483]: Removed session 14. 
Jul 9 10:13:10.761887 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 37136 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:10.764305 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:10.769654 systemd-logind[1483]: New session 15 of user core. Jul 9 10:13:10.779850 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 10:13:11.023806 sshd[4088]: Connection closed by 10.0.0.1 port 37136 Jul 9 10:13:11.024937 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:11.037037 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:37136.service: Deactivated successfully. Jul 9 10:13:11.039324 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 10:13:11.042135 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Jul 9 10:13:11.044498 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152). Jul 9 10:13:11.046060 systemd-logind[1483]: Removed session 15. Jul 9 10:13:11.113850 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:11.115312 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:11.124396 systemd-logind[1483]: New session 16 of user core. Jul 9 10:13:11.133853 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 10:13:11.911334 sshd[4103]: Connection closed by 10.0.0.1 port 37152 Jul 9 10:13:11.911879 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:11.924306 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:37152.service: Deactivated successfully. Jul 9 10:13:11.928389 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 10:13:11.932175 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Jul 9 10:13:11.935113 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:37156.service - OpenSSH per-connection server daemon (10.0.0.1:37156). Jul 9 10:13:11.935630 systemd-logind[1483]: Removed session 16. Jul 9 10:13:11.986567 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 37156 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:11.987918 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:11.992207 systemd-logind[1483]: New session 17 of user core. Jul 9 10:13:11.998823 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 10:13:12.226421 sshd[4126]: Connection closed by 10.0.0.1 port 37156 Jul 9 10:13:12.225335 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:12.238213 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:37156.service: Deactivated successfully. Jul 9 10:13:12.239965 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 10:13:12.240599 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Jul 9 10:13:12.242786 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:37166.service - OpenSSH per-connection server daemon (10.0.0.1:37166). Jul 9 10:13:12.243586 systemd-logind[1483]: Removed session 17. 
Jul 9 10:13:12.304053 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 37166 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:12.305297 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:12.309289 systemd-logind[1483]: New session 18 of user core. Jul 9 10:13:12.316824 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 10:13:12.424749 sshd[4140]: Connection closed by 10.0.0.1 port 37166 Jul 9 10:13:12.425279 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:12.429816 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:37166.service: Deactivated successfully. Jul 9 10:13:12.431557 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 10:13:12.432315 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Jul 9 10:13:12.433373 systemd-logind[1483]: Removed session 18. Jul 9 10:13:17.440854 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:42960.service - OpenSSH per-connection server daemon (10.0.0.1:42960). Jul 9 10:13:17.491772 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 42960 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:17.492794 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:17.496408 systemd-logind[1483]: New session 19 of user core. Jul 9 10:13:17.509896 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 10:13:17.620517 sshd[4158]: Connection closed by 10.0.0.1 port 42960 Jul 9 10:13:17.620889 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:17.624656 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Jul 9 10:13:17.624935 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:42960.service: Deactivated successfully. Jul 9 10:13:17.626543 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 10:13:17.628191 systemd-logind[1483]: Removed session 19. Jul 9 10:13:22.635862 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:45432.service - OpenSSH per-connection server daemon (10.0.0.1:45432). Jul 9 10:13:22.688978 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 45432 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:22.690092 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:22.694185 systemd-logind[1483]: New session 20 of user core. Jul 9 10:13:22.709919 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 10:13:22.815487 sshd[4177]: Connection closed by 10.0.0.1 port 45432 Jul 9 10:13:22.816154 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:22.819466 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:45432.service: Deactivated successfully. Jul 9 10:13:22.821313 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 10:13:22.822103 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Jul 9 10:13:22.823239 systemd-logind[1483]: Removed session 20. Jul 9 10:13:27.831032 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:45446.service - OpenSSH per-connection server daemon (10.0.0.1:45446). 
Jul 9 10:13:27.906514 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 45446 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:27.907875 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:27.913045 systemd-logind[1483]: New session 21 of user core. Jul 9 10:13:27.919855 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 10:13:28.028397 sshd[4193]: Connection closed by 10.0.0.1 port 45446 Jul 9 10:13:28.029138 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:28.037285 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:45446.service: Deactivated successfully. Jul 9 10:13:28.040288 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 10:13:28.041260 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Jul 9 10:13:28.044083 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:45454.service - OpenSSH per-connection server daemon (10.0.0.1:45454). Jul 9 10:13:28.044912 systemd-logind[1483]: Removed session 21. Jul 9 10:13:28.096767 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 45454 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:28.097942 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:28.102741 systemd-logind[1483]: New session 22 of user core. Jul 9 10:13:28.115904 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 10:13:30.445399 containerd[1498]: time="2025-07-09T10:13:30.445340489Z" level=info msg="StopContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" with timeout 30 (s)" Jul 9 10:13:30.455869 containerd[1498]: time="2025-07-09T10:13:30.455603987Z" level=info msg="Stop container \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" with signal terminated" Jul 9 10:13:30.478802 systemd[1]: cri-containerd-b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc.scope: Deactivated successfully. 
Jul 9 10:13:30.481342 containerd[1498]: time="2025-07-09T10:13:30.481295230Z" level=info msg="received exit event container_id:\"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" id:\"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" pid:3216 exited_at:{seconds:1752056010 nanos:480964735}" Jul 9 10:13:30.481527 containerd[1498]: time="2025-07-09T10:13:30.481448379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" id:\"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" pid:3216 exited_at:{seconds:1752056010 nanos:480964735}" Jul 9 10:13:30.489467 containerd[1498]: time="2025-07-09T10:13:30.489372775Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 10:13:30.495040 containerd[1498]: time="2025-07-09T10:13:30.494875476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" id:\"589a13ee8454186143bcfc387bec555156d9125c4e5beaece39b9f1b73b437ce\" pid:4237 exited_at:{seconds:1752056010 nanos:493823396}" Jul 9 10:13:30.497409 containerd[1498]: time="2025-07-09T10:13:30.497377885Z" level=info msg="StopContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" with timeout 2 (s)" Jul 9 10:13:30.497925 containerd[1498]: time="2025-07-09T10:13:30.497893766Z" level=info msg="Stop container \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" with signal terminated" Jul 9 10:13:30.502806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc-rootfs.mount: Deactivated successfully. Jul 9 10:13:30.505717 systemd-networkd[1436]: lxc_health: Link DOWN Jul 9 10:13:30.505731 systemd-networkd[1436]: lxc_health: Lost carrier Jul 9 10:13:30.514166 containerd[1498]: time="2025-07-09T10:13:30.514111651Z" level=info msg="StopContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" returns successfully" Jul 9 10:13:30.518821 containerd[1498]: time="2025-07-09T10:13:30.518771616Z" level=info msg="StopPodSandbox for \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\"" Jul 9 10:13:30.524624 systemd[1]: cri-containerd-cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd.scope: Deactivated successfully. Jul 9 10:13:30.525178 systemd[1]: cri-containerd-cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd.scope: Consumed 6.518s CPU time, 121M memory peak, 128K read from disk, 12.9M written to disk. 
Jul 9 10:13:30.526125 containerd[1498]: time="2025-07-09T10:13:30.526052981Z" level=info msg="received exit event container_id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" pid:3289 exited_at:{seconds:1752056010 nanos:525591056}" Jul 9 10:13:30.526265 containerd[1498]: time="2025-07-09T10:13:30.526095338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" id:\"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" pid:3289 exited_at:{seconds:1752056010 nanos:525591056}" Jul 9 10:13:30.538011 containerd[1498]: time="2025-07-09T10:13:30.537962994Z" level=info msg="Container to stop \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.545368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd-rootfs.mount: Deactivated successfully. Jul 9 10:13:30.548352 systemd[1]: cri-containerd-db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef.scope: Deactivated successfully. Jul 9 10:13:30.549823 containerd[1498]: time="2025-07-09T10:13:30.549784814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" id:\"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" pid:2853 exit_status:137 exited_at:{seconds:1752056010 nanos:549152382}" Jul 9 10:13:30.561890 containerd[1498]: time="2025-07-09T10:13:30.561849735Z" level=info msg="StopContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" returns successfully" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562818661Z" level=info msg="StopPodSandbox for \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\"" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562885296Z" level=info msg="Container to stop \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562898575Z" level=info msg="Container to stop \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562908174Z" level=info msg="Container to stop \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562917613Z" level=info msg="Container to stop \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.563121 containerd[1498]: time="2025-07-09T10:13:30.562927173Z" level=info msg="Container to stop \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:13:30.571693 systemd[1]: cri-containerd-f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e.scope: Deactivated successfully. 
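The teardown above follows the CRI shape kubelet uses when a pod is deleted: StopContainer with a grace period (30s for the operator, 2s for the agent, after which the runtime sends SIGKILL), the transient systemd scope is deactivated, and StopPodSandbox then tears down the sandbox and its network namespace. A hedged sketch of those two CRI calls issued directly against the containerd socket follows; the IDs are placeholders, and this mimics rather than reproduces kubelet's own client:

```go
// Sketch: StopContainer + StopPodSandbox over the CRI gRPC API exposed by
// containerd, mirroring the "StopContainer ... with timeout 30 (s)" and
// "StopPodSandbox" records above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

const (
	containerID  = "<container-id>"   // placeholder for a full 64-char ID from the log
	podSandboxID = "<pod-sandbox-id>" // placeholder
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Give the container 30 seconds to exit on SIGTERM before SIGKILL.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}

	// Stopping the sandbox also tears down its network ("TearDown network ...").
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: podSandboxID,
	}); err != nil {
		log.Fatal(err)
	}
}
```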
Jul 9 10:13:30.580235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef-rootfs.mount: Deactivated successfully. Jul 9 10:13:30.582189 kubelet[2629]: E0709 10:13:30.582137 2629 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 10:13:30.583780 containerd[1498]: time="2025-07-09T10:13:30.583689951Z" level=info msg="shim disconnected" id=db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef namespace=k8s.io Jul 9 10:13:30.596042 containerd[1498]: time="2025-07-09T10:13:30.583724548Z" level=warning msg="cleaning up after shim disconnected" id=db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef namespace=k8s.io Jul 9 10:13:30.596260 containerd[1498]: time="2025-07-09T10:13:30.596187719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:13:30.607598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e-rootfs.mount: Deactivated successfully. Jul 9 10:13:30.622589 containerd[1498]: time="2025-07-09T10:13:30.622441880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" id:\"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" pid:2782 exit_status:137 exited_at:{seconds:1752056010 nanos:578555782}" Jul 9 10:13:30.622893 containerd[1498]: time="2025-07-09T10:13:30.622858328Z" level=info msg="TearDown network for sandbox \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" successfully" Jul 9 10:13:30.622893 containerd[1498]: time="2025-07-09T10:13:30.622888725Z" level=info msg="StopPodSandbox for \"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" returns successfully" Jul 9 10:13:30.624328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef-shm.mount: Deactivated successfully. 
Jul 9 10:13:30.628995 containerd[1498]: time="2025-07-09T10:13:30.628961343Z" level=info msg="shim disconnected" id=f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e namespace=k8s.io Jul 9 10:13:30.629070 containerd[1498]: time="2025-07-09T10:13:30.628989941Z" level=warning msg="cleaning up after shim disconnected" id=f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e namespace=k8s.io Jul 9 10:13:30.629070 containerd[1498]: time="2025-07-09T10:13:30.629015419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:13:30.629835 containerd[1498]: time="2025-07-09T10:13:30.629794599Z" level=info msg="received exit event sandbox_id:\"db20edd656d324ce4dce9ecdcde53db9a935e762ebeef48e159e2603ec1cf4ef\" exit_status:137 exited_at:{seconds:1752056010 nanos:549152382}" Jul 9 10:13:30.635002 containerd[1498]: time="2025-07-09T10:13:30.634969525Z" level=info msg="TearDown network for sandbox \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" successfully" Jul 9 10:13:30.635002 containerd[1498]: time="2025-07-09T10:13:30.635000603Z" level=info msg="StopPodSandbox for \"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" returns successfully" Jul 9 10:13:30.635338 containerd[1498]: time="2025-07-09T10:13:30.635309699Z" level=info msg="received exit event sandbox_id:\"f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e\" exit_status:137 exited_at:{seconds:1752056010 nanos:578555782}" Jul 9 10:13:30.658468 kubelet[2629]: I0709 10:13:30.658431 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-cilium-config-path\") pod \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\" (UID: \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\") " Jul 9 10:13:30.658468 kubelet[2629]: I0709 10:13:30.658479 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6t78\" (UniqueName: \"kubernetes.io/projected/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-kube-api-access-j6t78\") pod \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\" (UID: \"8f9a4a42-1dcf-4eac-a285-40b91c0f6177\") " Jul 9 10:13:30.676354 kubelet[2629]: I0709 10:13:30.676274 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f9a4a42-1dcf-4eac-a285-40b91c0f6177" (UID: "8f9a4a42-1dcf-4eac-a285-40b91c0f6177"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 10:13:30.677810 kubelet[2629]: I0709 10:13:30.677766 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-kube-api-access-j6t78" (OuterVolumeSpecName: "kube-api-access-j6t78") pod "8f9a4a42-1dcf-4eac-a285-40b91c0f6177" (UID: "8f9a4a42-1dcf-4eac-a285-40b91c0f6177"). InnerVolumeSpecName "kube-api-access-j6t78". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:13:30.728124 kubelet[2629]: I0709 10:13:30.727481 2629 scope.go:117] "RemoveContainer" containerID="b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc" Jul 9 10:13:30.731441 containerd[1498]: time="2025-07-09T10:13:30.731270670Z" level=info msg="RemoveContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\"" Jul 9 10:13:30.737751 containerd[1498]: time="2025-07-09T10:13:30.737687022Z" level=info msg="RemoveContainer for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" returns successfully" Jul 9 10:13:30.738015 systemd[1]: Removed slice kubepods-besteffort-pod8f9a4a42_1dcf_4eac_a285_40b91c0f6177.slice - libcontainer container kubepods-besteffort-pod8f9a4a42_1dcf_4eac_a285_40b91c0f6177.slice. Jul 9 10:13:30.738539 kubelet[2629]: I0709 10:13:30.738504 2629 scope.go:117] "RemoveContainer" containerID="b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc" Jul 9 10:13:30.739042 containerd[1498]: time="2025-07-09T10:13:30.738967604Z" level=error msg="ContainerStatus for \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\": not found" Jul 9 10:13:30.739311 kubelet[2629]: E0709 10:13:30.739142 2629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\": not found" containerID="b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc" Jul 9 10:13:30.745106 kubelet[2629]: I0709 10:13:30.744992 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc"} err="failed to get container status \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84dfa42dc9a30b82448ca31c2d7911bb375207eb3b3420038224959306d62fc\": not found" Jul 9 10:13:30.745106 kubelet[2629]: I0709 10:13:30.745111 2629 scope.go:117] "RemoveContainer" containerID="cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd" Jul 9 10:13:30.748293 containerd[1498]: time="2025-07-09T10:13:30.748013675Z" level=info msg="RemoveContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\"" Jul 9 10:13:30.754056 containerd[1498]: time="2025-07-09T10:13:30.753957222Z" level=info msg="RemoveContainer for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" returns successfully" Jul 9 10:13:30.754207 kubelet[2629]: I0709 10:13:30.754147 2629 scope.go:117] "RemoveContainer" containerID="b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8" Jul 9 10:13:30.755843 containerd[1498]: time="2025-07-09T10:13:30.755778724Z" level=info msg="RemoveContainer for \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\"" Jul 9 10:13:30.759714 kubelet[2629]: I0709 10:13:30.759065 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-net\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759714 kubelet[2629]: I0709 
10:13:30.759120 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-lib-modules\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759714 kubelet[2629]: I0709 10:13:30.759139 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-etc-cni-netd\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759714 kubelet[2629]: I0709 10:13:30.759155 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-run\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759714 kubelet[2629]: I0709 10:13:30.759171 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hostproc\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759714 kubelet[2629]: I0709 10:13:30.759185 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cni-path\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759204 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-config-path\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759223 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-kernel\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759241 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-clustermesh-secrets\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759255 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-cgroup\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759271 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmm4\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-kube-api-access-fpmm4\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.759914 kubelet[2629]: I0709 10:13:30.759286 2629 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-xtables-lock\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.760451 containerd[1498]: time="2025-07-09T10:13:30.759774899Z" level=info msg="RemoveContainer for \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" returns successfully" Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759308 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hubble-tls\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759322 2629 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-bpf-maps\") pod \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\" (UID: \"83e22362-d70b-4c48-bb8a-b6b0210d1ef7\") " Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759355 2629 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759364 2629 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6t78\" (UniqueName: \"kubernetes.io/projected/8f9a4a42-1dcf-4eac-a285-40b91c0f6177-kube-api-access-j6t78\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759404 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760481 kubelet[2629]: I0709 10:13:30.759430 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760633 kubelet[2629]: I0709 10:13:30.759444 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760633 kubelet[2629]: I0709 10:13:30.759457 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760633 kubelet[2629]: I0709 10:13:30.759468 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760633 kubelet[2629]: I0709 10:13:30.759482 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hostproc" (OuterVolumeSpecName: "hostproc") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760633 kubelet[2629]: I0709 10:13:30.759496 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cni-path" (OuterVolumeSpecName: "cni-path") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760763 kubelet[2629]: I0709 10:13:30.759804 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760763 kubelet[2629]: I0709 10:13:30.759831 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.760763 kubelet[2629]: I0709 10:13:30.759990 2629 scope.go:117] "RemoveContainer" containerID="41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20" Jul 9 10:13:30.760763 kubelet[2629]: I0709 10:13:30.759991 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:13:30.762249 kubelet[2629]: I0709 10:13:30.762215 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 10:13:30.763560 kubelet[2629]: I0709 10:13:30.763508 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-kube-api-access-fpmm4" (OuterVolumeSpecName: "kube-api-access-fpmm4") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "kube-api-access-fpmm4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:13:30.764603 kubelet[2629]: I0709 10:13:30.763626 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 10:13:30.764687 containerd[1498]: time="2025-07-09T10:13:30.763779354Z" level=info msg="RemoveContainer for \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\"" Jul 9 10:13:30.765079 kubelet[2629]: I0709 10:13:30.765052 2629 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83e22362-d70b-4c48-bb8a-b6b0210d1ef7" (UID: "83e22362-d70b-4c48-bb8a-b6b0210d1ef7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:13:30.767070 containerd[1498]: time="2025-07-09T10:13:30.767037506Z" level=info msg="RemoveContainer for \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" returns successfully" Jul 9 10:13:30.767333 kubelet[2629]: I0709 10:13:30.767311 2629 scope.go:117] "RemoveContainer" containerID="c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a" Jul 9 10:13:30.768835 containerd[1498]: time="2025-07-09T10:13:30.768809451Z" level=info msg="RemoveContainer for \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\"" Jul 9 10:13:30.771559 containerd[1498]: time="2025-07-09T10:13:30.771517085Z" level=info msg="RemoveContainer for \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" returns successfully" Jul 9 10:13:30.771834 kubelet[2629]: I0709 10:13:30.771812 2629 scope.go:117] "RemoveContainer" containerID="550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087" Jul 9 10:13:30.773230 containerd[1498]: time="2025-07-09T10:13:30.773206076Z" level=info msg="RemoveContainer for \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\"" Jul 9 10:13:30.775848 containerd[1498]: time="2025-07-09T10:13:30.775761642Z" level=info msg="RemoveContainer for \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" returns successfully" Jul 9 10:13:30.775948 kubelet[2629]: I0709 10:13:30.775898 2629 scope.go:117] "RemoveContainer" containerID="cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd" Jul 9 10:13:30.776099 containerd[1498]: time="2025-07-09T10:13:30.776065618Z" level=error msg="ContainerStatus for \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\": not found" Jul 9 10:13:30.776196 kubelet[2629]: E0709 10:13:30.776178 2629 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\": not found" containerID="cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd" Jul 9 10:13:30.776225 kubelet[2629]: I0709 10:13:30.776205 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd"} err="failed to get container status \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb74e053c6a16099ea8dab02c5866ec444ea8bc0ab12d097e20ca1e51c90bedd\": not found" Jul 9 10:13:30.776248 kubelet[2629]: I0709 10:13:30.776227 2629 scope.go:117] "RemoveContainer" containerID="b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8" Jul 9 10:13:30.776389 containerd[1498]: time="2025-07-09T10:13:30.776363836Z" level=error msg="ContainerStatus for \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\": not found" Jul 9 10:13:30.776512 kubelet[2629]: E0709 10:13:30.776490 2629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\": not found" containerID="b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8" Jul 9 10:13:30.776599 kubelet[2629]: I0709 10:13:30.776579 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8"} err="failed to get container status \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6613f745926812040d923a83039a11f398136561f21b9d9395780e5ebb5fde8\": not found" Jul 9 10:13:30.776659 kubelet[2629]: I0709 10:13:30.776648 2629 scope.go:117] "RemoveContainer" containerID="41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20" Jul 9 10:13:30.776923 containerd[1498]: time="2025-07-09T10:13:30.776892635Z" level=error msg="ContainerStatus for \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\": not found" Jul 9 10:13:30.777038 kubelet[2629]: E0709 10:13:30.777017 2629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\": not found" containerID="41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20" Jul 9 10:13:30.777072 kubelet[2629]: I0709 10:13:30.777046 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20"} err="failed to get container status \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\": rpc error: code = NotFound desc = an error occurred when try to find container \"41d5d67dd16c5ff6646f25e0eaacb03217658fa321fd5b4bfe968b0d7559bf20\": not found" Jul 9 
10:13:30.777072 kubelet[2629]: I0709 10:13:30.777063 2629 scope.go:117] "RemoveContainer" containerID="c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a" Jul 9 10:13:30.777307 containerd[1498]: time="2025-07-09T10:13:30.777276966Z" level=error msg="ContainerStatus for \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\": not found" Jul 9 10:13:30.777429 kubelet[2629]: E0709 10:13:30.777410 2629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\": not found" containerID="c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a" Jul 9 10:13:30.777459 kubelet[2629]: I0709 10:13:30.777434 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a"} err="failed to get container status \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9bc71189a9d74cafb25ac0b471974c40d25822712afbfc66be9a301a8f7054a\": not found" Jul 9 10:13:30.777459 kubelet[2629]: I0709 10:13:30.777449 2629 scope.go:117] "RemoveContainer" containerID="550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087" Jul 9 10:13:30.777708 containerd[1498]: time="2025-07-09T10:13:30.777653098Z" level=error msg="ContainerStatus for \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\": not found" Jul 9 10:13:30.777804 kubelet[2629]: E0709 10:13:30.777785 2629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\": not found" containerID="550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087" Jul 9 10:13:30.777846 kubelet[2629]: I0709 10:13:30.777807 2629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087"} err="failed to get container status \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\": rpc error: code = NotFound desc = an error occurred when try to find container \"550b5a03e2601484a348160a2a4b7280ab965a5ffd10cc45cbbcfeeb82657087\": not found" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860328 2629 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860361 2629 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860370 2629 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860380 2629 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fpmm4\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-kube-api-access-fpmm4\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860389 2629 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860397 2629 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860405 2629 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860480 kubelet[2629]: I0709 10:13:30.860413 2629 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860421 2629 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860428 2629 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860435 2629 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860444 2629 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860452 2629 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:30.860787 kubelet[2629]: I0709 10:13:30.860459 2629 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83e22362-d70b-4c48-bb8a-b6b0210d1ef7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:13:31.040855 systemd[1]: Removed slice kubepods-burstable-pod83e22362_d70b_4c48_bb8a_b6b0210d1ef7.slice - libcontainer container kubepods-burstable-pod83e22362_d70b_4c48_bb8a_b6b0210d1ef7.slice. Jul 9 10:13:31.040961 systemd[1]: kubepods-burstable-pod83e22362_d70b_4c48_bb8a_b6b0210d1ef7.slice: Consumed 6.693s CPU time, 121.4M memory peak, 132K read from disk, 12.9M written to disk. 
Jul 9 10:13:31.501757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f07b211f76f7f4d524fd020a72eaeda44192b02a9b4572726964458128b9603e-shm.mount: Deactivated successfully. Jul 9 10:13:31.501919 systemd[1]: var-lib-kubelet-pods-8f9a4a42\x2d1dcf\x2d4eac\x2da285\x2d40b91c0f6177-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6t78.mount: Deactivated successfully. Jul 9 10:13:31.501982 systemd[1]: var-lib-kubelet-pods-83e22362\x2dd70b\x2d4c48\x2dbb8a\x2db6b0210d1ef7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpmm4.mount: Deactivated successfully. Jul 9 10:13:31.502035 systemd[1]: var-lib-kubelet-pods-83e22362\x2dd70b\x2d4c48\x2dbb8a\x2db6b0210d1ef7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 10:13:31.502087 systemd[1]: var-lib-kubelet-pods-83e22362\x2dd70b\x2d4c48\x2dbb8a\x2db6b0210d1ef7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 9 10:13:32.381797 kubelet[2629]: I0709 10:13:32.381741 2629 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T10:13:32Z","lastTransitionTime":"2025-07-09T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 9 10:13:32.397488 sshd[4209]: Connection closed by 10.0.0.1 port 45454 Jul 9 10:13:32.398057 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:32.416691 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:45454.service: Deactivated successfully. Jul 9 10:13:32.418503 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 10:13:32.418757 systemd[1]: session-22.scope: Consumed 1.645s CPU time, 26.2M memory peak. Jul 9 10:13:32.419334 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Jul 9 10:13:32.421610 systemd-logind[1483]: Removed session 22. Jul 9 10:13:32.423089 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:45462.service - OpenSSH per-connection server daemon (10.0.0.1:45462). Jul 9 10:13:32.481569 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 45462 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:32.482854 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:32.487298 systemd-logind[1483]: New session 23 of user core. Jul 9 10:13:32.497815 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 10:13:32.523710 kubelet[2629]: I0709 10:13:32.523512 2629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e22362-d70b-4c48-bb8a-b6b0210d1ef7" path="/var/lib/kubelet/pods/83e22362-d70b-4c48-bb8a-b6b0210d1ef7/volumes" Jul 9 10:13:32.524236 kubelet[2629]: I0709 10:13:32.524211 2629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9a4a42-1dcf-4eac-a285-40b91c0f6177" path="/var/lib/kubelet/pods/8f9a4a42-1dcf-4eac-a285-40b91c0f6177/volumes" Jul 9 10:13:33.318741 sshd[4362]: Connection closed by 10.0.0.1 port 45462 Jul 9 10:13:33.318113 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:33.329310 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:45462.service: Deactivated successfully. Jul 9 10:13:33.333292 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 10:13:33.334670 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. 
Jul 9 10:13:33.339892 systemd-logind[1483]: Removed session 23. Jul 9 10:13:33.342833 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:59300.service - OpenSSH per-connection server daemon (10.0.0.1:59300). Jul 9 10:13:33.345216 kubelet[2629]: I0709 10:13:33.344152 2629 memory_manager.go:355] "RemoveStaleState removing state" podUID="83e22362-d70b-4c48-bb8a-b6b0210d1ef7" containerName="cilium-agent" Jul 9 10:13:33.345216 kubelet[2629]: I0709 10:13:33.344201 2629 memory_manager.go:355] "RemoveStaleState removing state" podUID="8f9a4a42-1dcf-4eac-a285-40b91c0f6177" containerName="cilium-operator" Jul 9 10:13:33.359014 systemd[1]: Created slice kubepods-burstable-pod939752cd_6443_4b4a_8cd4_283b4ab2fc84.slice - libcontainer container kubepods-burstable-pod939752cd_6443_4b4a_8cd4_283b4ab2fc84.slice. Jul 9 10:13:33.375627 kubelet[2629]: I0709 10:13:33.375581 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-bpf-maps\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375627 kubelet[2629]: I0709 10:13:33.375619 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-xtables-lock\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375807 kubelet[2629]: I0709 10:13:33.375642 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/939752cd-6443-4b4a-8cd4-283b4ab2fc84-clustermesh-secrets\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375807 kubelet[2629]: I0709 10:13:33.375658 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-host-proc-sys-net\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375807 kubelet[2629]: I0709 10:13:33.375751 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-cilium-cgroup\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375807 kubelet[2629]: I0709 10:13:33.375771 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-cni-path\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.375807 kubelet[2629]: I0709 10:13:33.375807 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/939752cd-6443-4b4a-8cd4-283b4ab2fc84-cilium-config-path\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375824 2629 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-hostproc\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375858 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-lib-modules\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375886 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-host-proc-sys-kernel\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375900 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/939752cd-6443-4b4a-8cd4-283b4ab2fc84-hubble-tls\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375936 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-cilium-run\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376026 kubelet[2629]: I0709 10:13:33.375963 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/939752cd-6443-4b4a-8cd4-283b4ab2fc84-etc-cni-netd\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376149 kubelet[2629]: I0709 10:13:33.375979 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/939752cd-6443-4b4a-8cd4-283b4ab2fc84-cilium-ipsec-secrets\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.376149 kubelet[2629]: I0709 10:13:33.376000 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxt5\" (UniqueName: \"kubernetes.io/projected/939752cd-6443-4b4a-8cd4-283b4ab2fc84-kube-api-access-2wxt5\") pod \"cilium-bg76g\" (UID: \"939752cd-6443-4b4a-8cd4-283b4ab2fc84\") " pod="kube-system/cilium-bg76g" Jul 9 10:13:33.403241 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 59300 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:33.404462 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:33.408413 systemd-logind[1483]: New session 24 of user core. Jul 9 10:13:33.417821 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 9 10:13:33.468090 sshd[4377]: Connection closed by 10.0.0.1 port 59300 Jul 9 10:13:33.468493 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:33.491209 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:59300.service: Deactivated successfully. Jul 9 10:13:33.497022 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 10:13:33.498049 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit. Jul 9 10:13:33.500605 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Jul 9 10:13:33.501161 systemd-logind[1483]: Removed session 24. Jul 9 10:13:33.552116 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:r5pv4CxD4ouoBCRIaURqtjo6IXzmDq3oyyJedob6mn4 Jul 9 10:13:33.553501 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:13:33.560102 systemd-logind[1483]: New session 25 of user core. Jul 9 10:13:33.569850 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 10:13:33.665437 containerd[1498]: time="2025-07-09T10:13:33.665150162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bg76g,Uid:939752cd-6443-4b4a-8cd4-283b4ab2fc84,Namespace:kube-system,Attempt:0,}" Jul 9 10:13:33.692718 containerd[1498]: time="2025-07-09T10:13:33.692508827Z" level=info msg="connecting to shim 8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" namespace=k8s.io protocol=ttrpc version=3 Jul 9 10:13:33.723956 systemd[1]: Started cri-containerd-8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac.scope - libcontainer container 8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac. 
Jul 9 10:13:33.748297 containerd[1498]: time="2025-07-09T10:13:33.748230214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bg76g,Uid:939752cd-6443-4b4a-8cd4-283b4ab2fc84,Namespace:kube-system,Attempt:0,} returns sandbox id \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\"" Jul 9 10:13:33.751889 containerd[1498]: time="2025-07-09T10:13:33.751833340Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 10:13:33.778009 containerd[1498]: time="2025-07-09T10:13:33.777363685Z" level=info msg="Container baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:33.783366 containerd[1498]: time="2025-07-09T10:13:33.783291100Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\"" Jul 9 10:13:33.784176 containerd[1498]: time="2025-07-09T10:13:33.783913460Z" level=info msg="StartContainer for \"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\"" Jul 9 10:13:33.785015 containerd[1498]: time="2025-07-09T10:13:33.784946233Z" level=info msg="connecting to shim baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" protocol=ttrpc version=3 Jul 9 10:13:33.810869 systemd[1]: Started cri-containerd-baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1.scope - libcontainer container baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1. Jul 9 10:13:33.836918 containerd[1498]: time="2025-07-09T10:13:33.836809030Z" level=info msg="StartContainer for \"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\" returns successfully" Jul 9 10:13:33.863447 systemd[1]: cri-containerd-baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1.scope: Deactivated successfully. 
Jul 9 10:13:33.866373 containerd[1498]: time="2025-07-09T10:13:33.866266759Z" level=info msg="received exit event container_id:\"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\" id:\"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\" pid:4457 exited_at:{seconds:1752056013 nanos:865920582}" Jul 9 10:13:33.866692 containerd[1498]: time="2025-07-09T10:13:33.866648374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\" id:\"baca36ec7980f042ee1b46bcf0ad569dfc188cfe2f560d79fb7b73b02cf41ae1\" pid:4457 exited_at:{seconds:1752056013 nanos:865920582}" Jul 9 10:13:34.752128 containerd[1498]: time="2025-07-09T10:13:34.752083330Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 10:13:34.762644 containerd[1498]: time="2025-07-09T10:13:34.761999802Z" level=info msg="Container 4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:34.769263 containerd[1498]: time="2025-07-09T10:13:34.769043970Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\"" Jul 9 10:13:34.770827 containerd[1498]: time="2025-07-09T10:13:34.770778303Z" level=info msg="StartContainer for \"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\"" Jul 9 10:13:34.771736 containerd[1498]: time="2025-07-09T10:13:34.771665289Z" level=info msg="connecting to shim 4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" protocol=ttrpc version=3 Jul 9 10:13:34.800901 systemd[1]: Started cri-containerd-4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727.scope - libcontainer container 4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727. Jul 9 10:13:34.829632 containerd[1498]: time="2025-07-09T10:13:34.829592577Z" level=info msg="StartContainer for \"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\" returns successfully" Jul 9 10:13:34.871019 systemd[1]: cri-containerd-4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727.scope: Deactivated successfully. Jul 9 10:13:34.871942 containerd[1498]: time="2025-07-09T10:13:34.871869905Z" level=info msg="received exit event container_id:\"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\" id:\"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\" pid:4504 exited_at:{seconds:1752056014 nanos:871654519}" Jul 9 10:13:34.872259 containerd[1498]: time="2025-07-09T10:13:34.872182686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\" id:\"4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727\" pid:4504 exited_at:{seconds:1752056014 nanos:871654519}" Jul 9 10:13:35.487956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aff0941f1b4fe76063739bb092e1893df9e4cf95f3d9b1f10d72c035462d727-rootfs.mount: Deactivated successfully. 
Jul 9 10:13:35.583911 kubelet[2629]: E0709 10:13:35.583821 2629 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 10:13:35.757594 containerd[1498]: time="2025-07-09T10:13:35.757188176Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 10:13:35.771819 containerd[1498]: time="2025-07-09T10:13:35.769631656Z" level=info msg="Container 9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:35.778418 containerd[1498]: time="2025-07-09T10:13:35.778364030Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\"" Jul 9 10:13:35.778950 containerd[1498]: time="2025-07-09T10:13:35.778921558Z" level=info msg="StartContainer for \"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\"" Jul 9 10:13:35.790765 containerd[1498]: time="2025-07-09T10:13:35.789860525Z" level=info msg="connecting to shim 9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" protocol=ttrpc version=3 Jul 9 10:13:35.817879 systemd[1]: Started cri-containerd-9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb.scope - libcontainer container 9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb. Jul 9 10:13:35.852977 systemd[1]: cri-containerd-9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb.scope: Deactivated successfully. Jul 9 10:13:35.855431 containerd[1498]: time="2025-07-09T10:13:35.855401371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\" id:\"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\" pid:4549 exited_at:{seconds:1752056015 nanos:854899080}" Jul 9 10:13:35.855564 containerd[1498]: time="2025-07-09T10:13:35.855533803Z" level=info msg="received exit event container_id:\"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\" id:\"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\" pid:4549 exited_at:{seconds:1752056015 nanos:854899080}" Jul 9 10:13:35.856479 containerd[1498]: time="2025-07-09T10:13:35.856437631Z" level=info msg="StartContainer for \"9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb\" returns successfully" Jul 9 10:13:35.880035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e5552ea05ca5a953f089e5fede43088027b67fbc44626c9627097d86a54d1cb-rootfs.mount: Deactivated successfully. 
Jul 9 10:13:36.760450 containerd[1498]: time="2025-07-09T10:13:36.760393747Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 10:13:36.768073 containerd[1498]: time="2025-07-09T10:13:36.767395965Z" level=info msg="Container 82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:36.775567 containerd[1498]: time="2025-07-09T10:13:36.775523722Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\"" Jul 9 10:13:36.776707 containerd[1498]: time="2025-07-09T10:13:36.776632661Z" level=info msg="StartContainer for \"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\"" Jul 9 10:13:36.779902 containerd[1498]: time="2025-07-09T10:13:36.779867485Z" level=info msg="connecting to shim 82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" protocol=ttrpc version=3 Jul 9 10:13:36.797830 systemd[1]: Started cri-containerd-82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4.scope - libcontainer container 82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4. Jul 9 10:13:36.819825 systemd[1]: cri-containerd-82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4.scope: Deactivated successfully. Jul 9 10:13:36.821163 containerd[1498]: time="2025-07-09T10:13:36.821115874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\" id:\"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\" pid:4587 exited_at:{seconds:1752056016 nanos:820869248}" Jul 9 10:13:36.823155 containerd[1498]: time="2025-07-09T10:13:36.823092367Z" level=info msg="received exit event container_id:\"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\" id:\"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\" pid:4587 exited_at:{seconds:1752056016 nanos:820869248}" Jul 9 10:13:36.830152 containerd[1498]: time="2025-07-09T10:13:36.830095904Z" level=info msg="StartContainer for \"82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4\" returns successfully" Jul 9 10:13:36.843826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e07a2916acdc17ac8718fb02edea86b691c2926ecee4fcdb3011188d63d7a4-rootfs.mount: Deactivated successfully. 
Jul 9 10:13:37.766624 containerd[1498]: time="2025-07-09T10:13:37.766501353Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 10:13:37.777692 containerd[1498]: time="2025-07-09T10:13:37.777218163Z" level=info msg="Container 4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c: CDI devices from CRI Config.CDIDevices: []" Jul 9 10:13:37.785777 containerd[1498]: time="2025-07-09T10:13:37.785738125Z" level=info msg="CreateContainer within sandbox \"8daa84939b60ab7d4c78a616be8a01dba0d8cf3d7eca457f3e722db8d3a459ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\"" Jul 9 10:13:37.786433 containerd[1498]: time="2025-07-09T10:13:37.786395571Z" level=info msg="StartContainer for \"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\"" Jul 9 10:13:37.787291 containerd[1498]: time="2025-07-09T10:13:37.787267687Z" level=info msg="connecting to shim 4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c" address="unix:///run/containerd/s/1c0a455bd32970cd279fbb5cfb523a254aba03febd16c38e9cf65d2da1377c66" protocol=ttrpc version=3 Jul 9 10:13:37.815853 systemd[1]: Started cri-containerd-4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c.scope - libcontainer container 4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c. Jul 9 10:13:37.845666 containerd[1498]: time="2025-07-09T10:13:37.845624690Z" level=info msg="StartContainer for \"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" returns successfully" Jul 9 10:13:37.899605 containerd[1498]: time="2025-07-09T10:13:37.899562241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" id:\"f44a53554b4189a66d6e65f9c117b35173bcbff3f1cfde57a95bc62d0f90b180\" pid:4654 exited_at:{seconds:1752056017 nanos:899288135}" Jul 9 10:13:38.141694 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 9 10:13:38.790579 kubelet[2629]: I0709 10:13:38.790511 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bg76g" podStartSLOduration=5.790496356 podStartE2EDuration="5.790496356s" podCreationTimestamp="2025-07-09 10:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:13:38.78915922 +0000 UTC m=+78.335307881" watchObservedRunningTime="2025-07-09 10:13:38.790496356 +0000 UTC m=+78.336645017" Jul 9 10:13:39.955006 containerd[1498]: time="2025-07-09T10:13:39.954766848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" id:\"a2ae5f78629363d96f61243e2a21d0b1f7bc3b34180d41b611d5bc18ba26b353\" pid:4826 exit_status:1 exited_at:{seconds:1752056019 nanos:954373225}" Jul 9 10:13:40.997503 systemd-networkd[1436]: lxc_health: Link UP Jul 9 10:13:40.997792 systemd-networkd[1436]: lxc_health: Gained carrier Jul 9 10:13:42.087054 containerd[1498]: time="2025-07-09T10:13:42.086741843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" id:\"a63957b3c85497a835de123b63a944f964ddb94acfef5b456c4fc82ec9c4841b\" pid:5203 exited_at:{seconds:1752056022 nanos:86274140}" Jul 9 
10:13:43.004457 systemd-networkd[1436]: lxc_health: Gained IPv6LL Jul 9 10:13:44.225263 containerd[1498]: time="2025-07-09T10:13:44.225213909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" id:\"cb97e2b1a510c7cd0a61d210fcc037f25a1059ea6e28c0ab859867eb83f12ff2\" pid:5233 exited_at:{seconds:1752056024 nanos:224761243}" Jul 9 10:13:46.342557 containerd[1498]: time="2025-07-09T10:13:46.342435500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d674134ead421e0d78e91fec667b1c723a5bcae1548ebcf808aed1b0c3bf30c\" id:\"c2042b86ffbd0dbcb1d6813dc42641dd9e535825fcdb828902696b15cde6da87\" pid:5264 exited_at:{seconds:1752056026 nanos:342001911}" Jul 9 10:13:46.357133 sshd[4391]: Connection closed by 10.0.0.1 port 59312 Jul 9 10:13:46.357838 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Jul 9 10:13:46.362194 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit. Jul 9 10:13:46.363197 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:59312.service: Deactivated successfully. Jul 9 10:13:46.365412 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 10:13:46.366778 systemd-logind[1483]: Removed session 25.