Jul 8 09:55:16.764783 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 8 09:55:16.764803 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 8 08:29:21 -00 2025
Jul 8 09:55:16.764813 kernel: KASLR enabled
Jul 8 09:55:16.764829 kernel: efi: EFI v2.7 by EDK II
Jul 8 09:55:16.764836 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 8 09:55:16.764841 kernel: random: crng init done
Jul 8 09:55:16.764848 kernel: secureboot: Secure boot disabled
Jul 8 09:55:16.764854 kernel: ACPI: Early table checksum verification disabled
Jul 8 09:55:16.764860 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 8 09:55:16.764869 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 8 09:55:16.764875 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764881 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764887 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764894 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764901 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764908 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764915 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764921 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764928 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 09:55:16.764934 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 8 09:55:16.764940 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 8 09:55:16.764947 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 8 09:55:16.764953 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 8 09:55:16.764960 kernel: Zone ranges:
Jul 8 09:55:16.764966 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 8 09:55:16.764974 kernel: DMA32 empty
Jul 8 09:55:16.764980 kernel: Normal empty
Jul 8 09:55:16.764986 kernel: Device empty
Jul 8 09:55:16.764992 kernel: Movable zone start for each node
Jul 8 09:55:16.764998 kernel: Early memory node ranges
Jul 8 09:55:16.765005 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 8 09:55:16.765011 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 8 09:55:16.765017 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 8 09:55:16.765024 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 8 09:55:16.765030 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 8 09:55:16.765036 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 8 09:55:16.765042 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 8 09:55:16.765050 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 8 09:55:16.765056 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 8 09:55:16.765062 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 8 09:55:16.765071 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 8 09:55:16.765078 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 8 09:55:16.765085 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 8 09:55:16.765093 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 8 09:55:16.765100 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 8 09:55:16.765107 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 8 09:55:16.765113 kernel: psci: probing for conduit method from ACPI.
Jul 8 09:55:16.765120 kernel: psci: PSCIv1.1 detected in firmware.
Jul 8 09:55:16.765127 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 8 09:55:16.765133 kernel: psci: Trusted OS migration not required
Jul 8 09:55:16.765140 kernel: psci: SMC Calling Convention v1.1
Jul 8 09:55:16.765147 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 8 09:55:16.765153 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 8 09:55:16.765162 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 8 09:55:16.765169 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 8 09:55:16.765175 kernel: Detected PIPT I-cache on CPU0
Jul 8 09:55:16.765182 kernel: CPU features: detected: GIC system register CPU interface
Jul 8 09:55:16.765189 kernel: CPU features: detected: Spectre-v4
Jul 8 09:55:16.765195 kernel: CPU features: detected: Spectre-BHB
Jul 8 09:55:16.765202 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 8 09:55:16.765209 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 8 09:55:16.765216 kernel: CPU features: detected: ARM erratum 1418040
Jul 8 09:55:16.765222 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 8 09:55:16.765229 kernel: alternatives: applying boot alternatives
Jul 8 09:55:16.765237 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e45bccf50b3c8697dbe6c22614d97feceb95fd797a6c8fa74cac65f3c1164e8e
Jul 8 09:55:16.765245 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 8 09:55:16.765252 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 8 09:55:16.765259 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 8 09:55:16.765265 kernel: Fallback order for Node 0: 0
Jul 8 09:55:16.765272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 8 09:55:16.765279 kernel: Policy zone: DMA
Jul 8 09:55:16.765285 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 8 09:55:16.765292 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 8 09:55:16.765299 kernel: software IO TLB: area num 4.
Jul 8 09:55:16.765306 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 8 09:55:16.765312 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 8 09:55:16.765321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 8 09:55:16.765327 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 8 09:55:16.765335 kernel: rcu: RCU event tracing is enabled.
Jul 8 09:55:16.765342 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 8 09:55:16.765349 kernel: Trampoline variant of Tasks RCU enabled.
Jul 8 09:55:16.765356 kernel: Tracing variant of Tasks RCU enabled.
Jul 8 09:55:16.765362 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 8 09:55:16.765369 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 8 09:55:16.765376 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 09:55:16.765383 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 09:55:16.765390 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 8 09:55:16.765398 kernel: GICv3: 256 SPIs implemented
Jul 8 09:55:16.765404 kernel: GICv3: 0 Extended SPIs implemented
Jul 8 09:55:16.765411 kernel: Root IRQ handler: gic_handle_irq
Jul 8 09:55:16.765418 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 8 09:55:16.765424 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 8 09:55:16.765431 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 8 09:55:16.765438 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 8 09:55:16.765444 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 8 09:55:16.765473 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 8 09:55:16.765480 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 8 09:55:16.765487 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 8 09:55:16.765494 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 8 09:55:16.765503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 8 09:55:16.765510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 8 09:55:16.765517 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 8 09:55:16.765524 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 8 09:55:16.765530 kernel: arm-pv: using stolen time PV
Jul 8 09:55:16.765537 kernel: Console: colour dummy device 80x25
Jul 8 09:55:16.765544 kernel: ACPI: Core revision 20240827
Jul 8 09:55:16.765552 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 8 09:55:16.765559 kernel: pid_max: default: 32768 minimum: 301
Jul 8 09:55:16.765565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 8 09:55:16.765574 kernel: landlock: Up and running.
Jul 8 09:55:16.765581 kernel: SELinux: Initializing.
Jul 8 09:55:16.765587 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 09:55:16.765594 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 09:55:16.765601 kernel: rcu: Hierarchical SRCU implementation.
Jul 8 09:55:16.765608 kernel: rcu: Max phase no-delay instances is 400.
Jul 8 09:55:16.765615 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 8 09:55:16.765622 kernel: Remapping and enabling EFI services.
Jul 8 09:55:16.765629 kernel: smp: Bringing up secondary CPUs ...
Jul 8 09:55:16.765642 kernel: Detected PIPT I-cache on CPU1
Jul 8 09:55:16.765650 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 8 09:55:16.765657 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 8 09:55:16.765666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 8 09:55:16.765673 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 8 09:55:16.765680 kernel: Detected PIPT I-cache on CPU2
Jul 8 09:55:16.765688 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 8 09:55:16.765695 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 8 09:55:16.765704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 8 09:55:16.765711 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 8 09:55:16.765718 kernel: Detected PIPT I-cache on CPU3
Jul 8 09:55:16.765726 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 8 09:55:16.765733 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 8 09:55:16.765740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 8 09:55:16.765747 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 8 09:55:16.765755 kernel: smp: Brought up 1 node, 4 CPUs
Jul 8 09:55:16.765762 kernel: SMP: Total of 4 processors activated.
Jul 8 09:55:16.765771 kernel: CPU: All CPU(s) started at EL1
Jul 8 09:55:16.765778 kernel: CPU features: detected: 32-bit EL0 Support
Jul 8 09:55:16.765785 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 8 09:55:16.765793 kernel: CPU features: detected: Common not Private translations
Jul 8 09:55:16.765800 kernel: CPU features: detected: CRC32 instructions
Jul 8 09:55:16.765807 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 8 09:55:16.765815 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 8 09:55:16.765828 kernel: CPU features: detected: LSE atomic instructions
Jul 8 09:55:16.765836 kernel: CPU features: detected: Privileged Access Never
Jul 8 09:55:16.765843 kernel: CPU features: detected: RAS Extension Support
Jul 8 09:55:16.765852 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 8 09:55:16.765859 kernel: alternatives: applying system-wide alternatives
Jul 8 09:55:16.765867 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 8 09:55:16.765875 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 8 09:55:16.765882 kernel: devtmpfs: initialized
Jul 8 09:55:16.765890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 8 09:55:16.765897 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 8 09:55:16.765905 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 8 09:55:16.765913 kernel: 0 pages in range for non-PLT usage
Jul 8 09:55:16.765921 kernel: 508448 pages in range for PLT usage
Jul 8 09:55:16.765928 kernel: pinctrl core: initialized pinctrl subsystem
Jul 8 09:55:16.765935 kernel: SMBIOS 3.0.0 present.
Jul 8 09:55:16.765942 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 8 09:55:16.765950 kernel: DMI: Memory slots populated: 1/1
Jul 8 09:55:16.765957 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 8 09:55:16.765964 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 8 09:55:16.765972 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 8 09:55:16.765980 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 8 09:55:16.765988 kernel: audit: initializing netlink subsys (disabled)
Jul 8 09:55:16.765996 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 8 09:55:16.766003 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 8 09:55:16.766010 kernel: cpuidle: using governor menu
Jul 8 09:55:16.766018 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 8 09:55:16.766025 kernel: ASID allocator initialised with 32768 entries
Jul 8 09:55:16.766032 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 8 09:55:16.766039 kernel: Serial: AMBA PL011 UART driver
Jul 8 09:55:16.766047 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 8 09:55:16.766056 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 8 09:55:16.766063 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 8 09:55:16.766071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 8 09:55:16.766078 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 8 09:55:16.766085 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 8 09:55:16.766093 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 8 09:55:16.766100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 8 09:55:16.766107 kernel: ACPI: Added _OSI(Module Device)
Jul 8 09:55:16.766115 kernel: ACPI: Added _OSI(Processor Device)
Jul 8 09:55:16.766123 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 8 09:55:16.766131 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 8 09:55:16.766138 kernel: ACPI: Interpreter enabled
Jul 8 09:55:16.766145 kernel: ACPI: Using GIC for interrupt routing
Jul 8 09:55:16.766153 kernel: ACPI: MCFG table detected, 1 entries
Jul 8 09:55:16.766160 kernel: ACPI: CPU0 has been hot-added
Jul 8 09:55:16.766167 kernel: ACPI: CPU1 has been hot-added
Jul 8 09:55:16.766174 kernel: ACPI: CPU2 has been hot-added
Jul 8 09:55:16.766181 kernel: ACPI: CPU3 has been hot-added
Jul 8 09:55:16.766190 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 8 09:55:16.766197 kernel: printk: legacy console [ttyAMA0] enabled
Jul 8 09:55:16.766205 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 8 09:55:16.766331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 8 09:55:16.766398 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 8 09:55:16.766526 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 8 09:55:16.766595 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 8 09:55:16.766664 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 8 09:55:16.766673 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 8 09:55:16.766681 kernel: PCI host bridge to bus 0000:00
Jul 8 09:55:16.766757 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 8 09:55:16.766815 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 8 09:55:16.766890 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 8 09:55:16.766946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 8 09:55:16.767029 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 8 09:55:16.767106 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 8 09:55:16.767171 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 8 09:55:16.767235 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 8 09:55:16.767298 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 8 09:55:16.767361 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 8 09:55:16.767425 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 8 09:55:16.767507 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 8 09:55:16.767566 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 8 09:55:16.767622 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 8 09:55:16.767677 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 8 09:55:16.767687 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 8 09:55:16.767698 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 8 09:55:16.767707 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 8 09:55:16.767717 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 8 09:55:16.767724 kernel: iommu: Default domain type: Translated
Jul 8 09:55:16.767731 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 8 09:55:16.767739 kernel: efivars: Registered efivars operations
Jul 8 09:55:16.767746 kernel: vgaarb: loaded
Jul 8 09:55:16.767753 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 8 09:55:16.767761 kernel: VFS: Disk quotas dquot_6.6.0
Jul 8 09:55:16.767768 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 8 09:55:16.767775 kernel: pnp: PnP ACPI init
Jul 8 09:55:16.767856 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 8 09:55:16.767867 kernel: pnp: PnP ACPI: found 1 devices
Jul 8 09:55:16.767875 kernel: NET: Registered PF_INET protocol family
Jul 8 09:55:16.767882 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 8 09:55:16.767890 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 8 09:55:16.767897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 8 09:55:16.767905 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 8 09:55:16.767912 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 8 09:55:16.767919 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 8 09:55:16.767929 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 09:55:16.767936 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 09:55:16.767944 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 8 09:55:16.767951 kernel: PCI: CLS 0 bytes, default 64
Jul 8 09:55:16.767958 kernel: kvm [1]: HYP mode not available
Jul 8 09:55:16.767966 kernel: Initialise system trusted keyrings
Jul 8 09:55:16.767973 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 8 09:55:16.767980 kernel: Key type asymmetric registered
Jul 8 09:55:16.767987 kernel: Asymmetric key parser 'x509' registered
Jul 8 09:55:16.767996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 8 09:55:16.768003 kernel: io scheduler mq-deadline registered
Jul 8 09:55:16.768011 kernel: io scheduler kyber registered
Jul 8 09:55:16.768018 kernel: io scheduler bfq registered
Jul 8 09:55:16.768025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 8 09:55:16.768033 kernel: ACPI: button: Power Button [PWRB]
Jul 8 09:55:16.768040 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 8 09:55:16.768103 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 8 09:55:16.768114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 8 09:55:16.768123 kernel: thunder_xcv, ver 1.0
Jul 8 09:55:16.768130 kernel: thunder_bgx, ver 1.0
Jul 8 09:55:16.768137 kernel: nicpf, ver 1.0
Jul 8 09:55:16.768145 kernel: nicvf, ver 1.0
Jul 8 09:55:16.768216 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 8 09:55:16.768276 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-08T09:55:16 UTC (1751968516)
Jul 8 09:55:16.768286 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 8 09:55:16.768294 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 8 09:55:16.768303 kernel: watchdog: NMI not fully supported
Jul 8 09:55:16.768310 kernel: watchdog: Hard watchdog permanently disabled
Jul 8 09:55:16.768318 kernel: NET: Registered PF_INET6 protocol family
Jul 8 09:55:16.768325 kernel: Segment Routing with IPv6
Jul 8 09:55:16.768332 kernel: In-situ OAM (IOAM) with IPv6
Jul 8 09:55:16.768340 kernel: NET: Registered PF_PACKET protocol family
Jul 8 09:55:16.768347 kernel: Key type dns_resolver registered
Jul 8 09:55:16.768354 kernel: registered taskstats version 1
Jul 8 09:55:16.768362 kernel: Loading compiled-in X.509 certificates
Jul 8 09:55:16.768370 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8c3c02197124f435a8672a3d63779285062e169'
Jul 8 09:55:16.768378 kernel: Demotion targets for Node 0: null
Jul 8 09:55:16.768385 kernel: Key type .fscrypt registered
Jul 8 09:55:16.768392 kernel: Key type fscrypt-provisioning registered
Jul 8 09:55:16.768400 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 8 09:55:16.768407 kernel: ima: Allocated hash algorithm: sha1
Jul 8 09:55:16.768414 kernel: ima: No architecture policies found
Jul 8 09:55:16.768422 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 8 09:55:16.768429 kernel: clk: Disabling unused clocks
Jul 8 09:55:16.768437 kernel: PM: genpd: Disabling unused power domains
Jul 8 09:55:16.768445 kernel: Warning: unable to open an initial console.
Jul 8 09:55:16.768462 kernel: Freeing unused kernel memory: 39424K
Jul 8 09:55:16.768469 kernel: Run /init as init process
Jul 8 09:55:16.768477 kernel: with arguments:
Jul 8 09:55:16.768484 kernel: /init
Jul 8 09:55:16.768491 kernel: with environment:
Jul 8 09:55:16.768498 kernel: HOME=/
Jul 8 09:55:16.768505 kernel: TERM=linux
Jul 8 09:55:16.768514 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 8 09:55:16.768522 systemd[1]: Successfully made /usr/ read-only.
Jul 8 09:55:16.768532 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 8 09:55:16.768541 systemd[1]: Detected virtualization kvm.
Jul 8 09:55:16.768549 systemd[1]: Detected architecture arm64.
Jul 8 09:55:16.768556 systemd[1]: Running in initrd.
Jul 8 09:55:16.768564 systemd[1]: No hostname configured, using default hostname.
Jul 8 09:55:16.768573 systemd[1]: Hostname set to .
Jul 8 09:55:16.768581 systemd[1]: Initializing machine ID from VM UUID.
Jul 8 09:55:16.768589 systemd[1]: Queued start job for default target initrd.target.
Jul 8 09:55:16.768597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 09:55:16.768604 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 09:55:16.768613 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 8 09:55:16.768621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 8 09:55:16.768629 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 8 09:55:16.768639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 8 09:55:16.768648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 8 09:55:16.768656 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 8 09:55:16.768664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 09:55:16.768672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 8 09:55:16.768680 systemd[1]: Reached target paths.target - Path Units.
Jul 8 09:55:16.768688 systemd[1]: Reached target slices.target - Slice Units.
Jul 8 09:55:16.768697 systemd[1]: Reached target swap.target - Swaps.
Jul 8 09:55:16.768705 systemd[1]: Reached target timers.target - Timer Units.
Jul 8 09:55:16.768713 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 8 09:55:16.768721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 8 09:55:16.768729 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 8 09:55:16.768737 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 8 09:55:16.768745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 09:55:16.768753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 8 09:55:16.768762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 09:55:16.768770 systemd[1]: Reached target sockets.target - Socket Units.
Jul 8 09:55:16.768778 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 8 09:55:16.768786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 8 09:55:16.768794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 8 09:55:16.768802 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 8 09:55:16.768810 systemd[1]: Starting systemd-fsck-usr.service...
Jul 8 09:55:16.768823 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 8 09:55:16.768832 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 8 09:55:16.768841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 09:55:16.768849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 8 09:55:16.768858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 09:55:16.768866 systemd[1]: Finished systemd-fsck-usr.service.
Jul 8 09:55:16.768875 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 8 09:55:16.768899 systemd-journald[244]: Collecting audit messages is disabled.
Jul 8 09:55:16.768919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 09:55:16.768928 systemd-journald[244]: Journal started
Jul 8 09:55:16.768948 systemd-journald[244]: Runtime Journal (/run/log/journal/ea4f09599fd34ad98b4c24e9930576d4) is 6M, max 48.5M, 42.4M free.
Jul 8 09:55:16.759523 systemd-modules-load[245]: Inserted module 'overlay'
Jul 8 09:55:16.771929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 8 09:55:16.771960 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 8 09:55:16.773010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 8 09:55:16.775782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 8 09:55:16.777447 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 8 09:55:16.777510 kernel: Bridge firewalling registered
Jul 8 09:55:16.777898 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 8 09:55:16.778641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 8 09:55:16.783664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 8 09:55:16.785468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 8 09:55:16.789891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 09:55:16.791636 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 8 09:55:16.794700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 09:55:16.795761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 8 09:55:16.797808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 8 09:55:16.800879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 8 09:55:16.814945 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 8 09:55:16.828829 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e45bccf50b3c8697dbe6c22614d97feceb95fd797a6c8fa74cac65f3c1164e8e
Jul 8 09:55:16.842800 systemd-resolved[286]: Positive Trust Anchors:
Jul 8 09:55:16.842815 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 8 09:55:16.842853 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 8 09:55:16.847511 systemd-resolved[286]: Defaulting to hostname 'linux'.
Jul 8 09:55:16.848722 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 8 09:55:16.851283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 8 09:55:16.898473 kernel: SCSI subsystem initialized
Jul 8 09:55:16.903470 kernel: Loading iSCSI transport class v2.0-870.
Jul 8 09:55:16.910476 kernel: iscsi: registered transport (tcp)
Jul 8 09:55:16.922467 kernel: iscsi: registered transport (qla4xxx)
Jul 8 09:55:16.922486 kernel: QLogic iSCSI HBA Driver
Jul 8 09:55:16.937962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 8 09:55:16.957305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 09:55:16.959097 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 8 09:55:17.000401 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 8 09:55:17.002373 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 8 09:55:17.065480 kernel: raid6: neonx8 gen() 15770 MB/s
Jul 8 09:55:17.082468 kernel: raid6: neonx4 gen() 15786 MB/s
Jul 8 09:55:17.099466 kernel: raid6: neonx2 gen() 13196 MB/s
Jul 8 09:55:17.116462 kernel: raid6: neonx1 gen() 10438 MB/s
Jul 8 09:55:17.133473 kernel: raid6: int64x8 gen() 6889 MB/s
Jul 8 09:55:17.150474 kernel: raid6: int64x4 gen() 7341 MB/s
Jul 8 09:55:17.167474 kernel: raid6: int64x2 gen() 6095 MB/s
Jul 8 09:55:17.184465 kernel: raid6: int64x1 gen() 5041 MB/s
Jul 8 09:55:17.184488 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Jul 8 09:55:17.201473 kernel: raid6: .... xor() 12337 MB/s, rmw enabled
Jul 8 09:55:17.201486 kernel: raid6: using neon recovery algorithm
Jul 8 09:55:17.206465 kernel: xor: measuring software checksum speed
Jul 8 09:55:17.206483 kernel: 8regs : 20881 MB/sec
Jul 8 09:55:17.206493 kernel: 32regs : 19650 MB/sec
Jul 8 09:55:17.207751 kernel: arm64_neon : 28070 MB/sec
Jul 8 09:55:17.207774 kernel: xor: using function: arm64_neon (28070 MB/sec)
Jul 8 09:55:17.263482 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 8 09:55:17.270205 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 8 09:55:17.272415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 09:55:17.297361 systemd-udevd[502]: Using default interface naming scheme 'v255'.
Jul 8 09:55:17.301444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 09:55:17.304385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 8 09:55:17.327579 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation
Jul 8 09:55:17.348983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 8 09:55:17.351085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 8 09:55:17.403489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 09:55:17.405647 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 8 09:55:17.443781 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 8 09:55:17.444005 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 8 09:55:17.452473 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 8 09:55:17.452522 kernel: GPT:9289727 != 19775487
Jul 8 09:55:17.452535 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 8 09:55:17.452546 kernel: GPT:9289727 != 19775487
Jul 8 09:55:17.452557 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 8 09:55:17.452568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 09:55:17.456466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 09:55:17.457683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 09:55:17.459649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 09:55:17.461664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 09:55:17.486402 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 8 09:55:17.487603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 09:55:17.497408 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 8 09:55:17.500474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 8 09:55:17.512628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 8 09:55:17.518387 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 8 09:55:17.519311 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 8 09:55:17.521496 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 8 09:55:17.523239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 09:55:17.524784 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 8 09:55:17.526952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 8 09:55:17.528416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 8 09:55:17.552986 disk-uuid[593]: Primary Header is updated.
Jul 8 09:55:17.552986 disk-uuid[593]: Secondary Entries is updated.
Jul 8 09:55:17.552986 disk-uuid[593]: Secondary Header is updated.
Jul 8 09:55:17.556473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 09:55:17.556526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 8 09:55:18.568482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 09:55:18.569030 disk-uuid[596]: The operation has completed successfully.
Jul 8 09:55:18.592195 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 8 09:55:18.592284 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 8 09:55:18.617837 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 8 09:55:18.634298 sh[614]: Success
Jul 8 09:55:18.649277 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 8 09:55:18.651640 kernel: device-mapper: uevent: version 1.0.3
Jul 8 09:55:18.653487 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 8 09:55:18.661472 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 8 09:55:18.685946 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 8 09:55:18.688025 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 8 09:55:18.709721 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 8 09:55:18.716156 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 8 09:55:18.716191 kernel: BTRFS: device fsid bdbdb169-3c1e-42f0-a497-cf03d2c2f17c devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (626)
Jul 8 09:55:18.718627 kernel: BTRFS info (device dm-0): first mount of filesystem bdbdb169-3c1e-42f0-a497-cf03d2c2f17c
Jul 8 09:55:18.718648 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 8 09:55:18.718658 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 8 09:55:18.721856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 8 09:55:18.722837 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 8 09:55:18.723818 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 8 09:55:18.724517 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 8 09:55:18.726974 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 8 09:55:18.747464 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655)
Jul 8 09:55:18.749757 kernel: BTRFS info (device vda6): first mount of filesystem a41f3f29-7b0f-402e-8071-1d2630d50bc8
Jul 8 09:55:18.749817 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 8 09:55:18.749831 kernel: BTRFS info (device vda6): using free-space-tree
Jul 8 09:55:18.755483 kernel: BTRFS info (device vda6): last unmount of filesystem a41f3f29-7b0f-402e-8071-1d2630d50bc8
Jul 8 09:55:18.756304 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 8 09:55:18.758296 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 8 09:55:18.825546 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 8 09:55:18.828550 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 8 09:55:18.876239 systemd-networkd[798]: lo: Link UP
Jul 8 09:55:18.876256 systemd-networkd[798]: lo: Gained carrier
Jul 8 09:55:18.877254 systemd-networkd[798]: Enumeration completed
Jul 8 09:55:18.877691 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 8 09:55:18.877695 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 8 09:55:18.878799 systemd-networkd[798]: eth0: Link UP
Jul 8 09:55:18.878803 systemd-networkd[798]: eth0: Gained carrier
Jul 8 09:55:18.878824 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 8 09:55:18.878925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 8 09:55:18.881064 systemd[1]: Reached target network.target - Network.
Jul 8 09:55:18.900890 ignition[699]: Ignition 2.21.0
Jul 8 09:55:18.900914 ignition[699]: Stage: fetch-offline
Jul 8 09:55:18.901034 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:18.901045 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:18.905905 ignition[699]: parsed url from cmdline: ""
Jul 8 09:55:18.905914 ignition[699]: no config URL provided
Jul 8 09:55:18.905920 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Jul 8 09:55:18.905932 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Jul 8 09:55:18.906643 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 8 09:55:18.905953 ignition[699]: op(1): [started] loading QEMU firmware config module
Jul 8 09:55:18.905978 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 8 09:55:18.913708 ignition[699]: op(1): [finished] loading QEMU firmware config module
Jul 8 09:55:18.949538 ignition[699]: parsing config with SHA512: b4f8c01a292833d0b999e1c5522b3594662b4890f15af9ae81e586b5e3d5ef0a3443fbbeb4d4e60c00270e704b4c08bb470f0288474cd218d4505e5b2f9dccf4
Jul 8 09:55:18.953390 unknown[699]: fetched base config from "system"
Jul 8 09:55:18.953402 unknown[699]: fetched user config from "qemu"
Jul 8 09:55:18.953882 ignition[699]: fetch-offline: fetch-offline passed
Jul 8 09:55:18.953942 ignition[699]: Ignition finished successfully
Jul 8 09:55:18.955752 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 8 09:55:18.956934 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 8 09:55:18.957685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 8 09:55:18.988279 ignition[815]: Ignition 2.21.0
Jul 8 09:55:18.988295 ignition[815]: Stage: kargs
Jul 8 09:55:18.988477 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:18.988490 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:18.989696 ignition[815]: kargs: kargs passed
Jul 8 09:55:18.989749 ignition[815]: Ignition finished successfully
Jul 8 09:55:18.993875 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 8 09:55:18.996492 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 8 09:55:19.020241 ignition[823]: Ignition 2.21.0
Jul 8 09:55:19.020260 ignition[823]: Stage: disks
Jul 8 09:55:19.020400 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:19.020409 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:19.021887 ignition[823]: disks: disks passed
Jul 8 09:55:19.023598 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 8 09:55:19.021938 ignition[823]: Ignition finished successfully
Jul 8 09:55:19.024497 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 8 09:55:19.025648 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 8 09:55:19.026888 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 8 09:55:19.028143 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 8 09:55:19.029497 systemd[1]: Reached target basic.target - Basic System.
Jul 8 09:55:19.031540 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 8 09:55:19.061890 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 8 09:55:19.066122 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 8 09:55:19.067933 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 8 09:55:19.129479 kernel: EXT4-fs (vda9): mounted filesystem 0a07746c-74dd-4268-a51d-8a5a1fdc8a3a r/w with ordered data mode. Quota mode: none.
Jul 8 09:55:19.130183 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 8 09:55:19.131193 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 8 09:55:19.133551 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 8 09:55:19.135348 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 8 09:55:19.136157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 8 09:55:19.136194 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 8 09:55:19.136216 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 8 09:55:19.154985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 8 09:55:19.156753 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 8 09:55:19.160687 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (842)
Jul 8 09:55:19.160715 kernel: BTRFS info (device vda6): first mount of filesystem a41f3f29-7b0f-402e-8071-1d2630d50bc8
Jul 8 09:55:19.160725 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 8 09:55:19.161788 kernel: BTRFS info (device vda6): using free-space-tree
Jul 8 09:55:19.164049 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 8 09:55:19.197427 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Jul 8 09:55:19.201341 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Jul 8 09:55:19.204795 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Jul 8 09:55:19.208533 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 8 09:55:19.281525 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 8 09:55:19.284569 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 8 09:55:19.286191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 8 09:55:19.305488 kernel: BTRFS info (device vda6): last unmount of filesystem a41f3f29-7b0f-402e-8071-1d2630d50bc8
Jul 8 09:55:19.322186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 8 09:55:19.325110 ignition[957]: INFO : Ignition 2.21.0
Jul 8 09:55:19.325110 ignition[957]: INFO : Stage: mount
Jul 8 09:55:19.327331 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:19.327331 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:19.327331 ignition[957]: INFO : mount: mount passed
Jul 8 09:55:19.327331 ignition[957]: INFO : Ignition finished successfully
Jul 8 09:55:19.328015 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 8 09:55:19.329706 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 8 09:55:19.851706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 8 09:55:19.853288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 8 09:55:19.872919 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (969)
Jul 8 09:55:19.872956 kernel: BTRFS info (device vda6): first mount of filesystem a41f3f29-7b0f-402e-8071-1d2630d50bc8
Jul 8 09:55:19.872967 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 8 09:55:19.874460 kernel: BTRFS info (device vda6): using free-space-tree
Jul 8 09:55:19.876838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 8 09:55:19.909624 ignition[986]: INFO : Ignition 2.21.0
Jul 8 09:55:19.909624 ignition[986]: INFO : Stage: files
Jul 8 09:55:19.910963 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:19.910963 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:19.910963 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Jul 8 09:55:19.913570 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 8 09:55:19.913570 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 8 09:55:19.915713 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 8 09:55:19.915713 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 8 09:55:19.915713 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 8 09:55:19.915218 unknown[986]: wrote ssh authorized keys file for user: core
Jul 8 09:55:19.919924 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 8 09:55:19.919924 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 8 09:55:20.055324 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 8 09:55:20.251546 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 8 09:55:20.251546 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 8 09:55:20.254407 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 8 09:55:20.592873 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 8 09:55:20.653731 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 8 09:55:20.655075 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 8 09:55:20.665064 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 8 09:55:20.781586 systemd-networkd[798]: eth0: Gained IPv6LL
Jul 8 09:55:21.071092 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 8 09:55:21.287858 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 8 09:55:21.287858 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 8 09:55:21.290749 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 8 09:55:21.290749 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 8 09:55:21.293648 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 8 09:55:21.308854 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 8 09:55:21.311675 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 8 09:55:21.312776 ignition[986]: INFO : files: files passed
Jul 8 09:55:21.312776 ignition[986]: INFO : Ignition finished successfully
Jul 8 09:55:21.313414 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 8 09:55:21.316181 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 8 09:55:21.319583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 8 09:55:21.333897 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 8 09:55:21.333996 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 8 09:55:21.336433 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 8 09:55:21.337912 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 09:55:21.337912 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 09:55:21.340748 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 09:55:21.341559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 8 09:55:21.343234 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 8 09:55:21.346185 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 8 09:55:21.379045 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 8 09:55:21.379147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 8 09:55:21.380797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 8 09:55:21.382167 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 8 09:55:21.383532 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 8 09:55:21.384244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 8 09:55:21.407485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 8 09:55:21.409640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 8 09:55:21.434048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 8 09:55:21.434966 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 09:55:21.436389 systemd[1]: Stopped target timers.target - Timer Units.
Jul 8 09:55:21.437770 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 8 09:55:21.437902 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 8 09:55:21.439841 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 8 09:55:21.441232 systemd[1]: Stopped target basic.target - Basic System.
Jul 8 09:55:21.442385 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 8 09:55:21.443608 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 8 09:55:21.444980 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 8 09:55:21.446332 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 8 09:55:21.447747 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 8 09:55:21.449085 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 8 09:55:21.450620 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 8 09:55:21.452166 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 8 09:55:21.453434 systemd[1]: Stopped target swap.target - Swaps.
Jul 8 09:55:21.454625 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 8 09:55:21.454736 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 8 09:55:21.456502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 8 09:55:21.457922 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 09:55:21.459398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 8 09:55:21.461084 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 09:55:21.462140 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 8 09:55:21.462259 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 8 09:55:21.464569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 8 09:55:21.464686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 8 09:55:21.466209 systemd[1]: Stopped target paths.target - Path Units.
Jul 8 09:55:21.467407 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 8 09:55:21.471542 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 09:55:21.472576 systemd[1]: Stopped target slices.target - Slice Units.
Jul 8 09:55:21.474153 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 8 09:55:21.475308 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 8 09:55:21.475393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 8 09:55:21.476486 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 8 09:55:21.476565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 8 09:55:21.477648 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 8 09:55:21.477762 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 8 09:55:21.479003 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 8 09:55:21.479101 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 8 09:55:21.481146 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 8 09:55:21.482322 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 8 09:55:21.482445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 09:55:21.484837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 8 09:55:21.485890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 8 09:55:21.486014 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 09:55:21.487499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 8 09:55:21.487599 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 8 09:55:21.493875 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 8 09:55:21.494003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 8 09:55:21.500810 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 8 09:55:21.503923 ignition[1041]: INFO : Ignition 2.21.0
Jul 8 09:55:21.503923 ignition[1041]: INFO : Stage: umount
Jul 8 09:55:21.506232 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 8 09:55:21.506232 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 09:55:21.506232 ignition[1041]: INFO : umount: umount passed
Jul 8 09:55:21.506232 ignition[1041]: INFO : Ignition finished successfully
Jul 8 09:55:21.508043 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 8 09:55:21.508152 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 8 09:55:21.509125 systemd[1]: Stopped target network.target - Network.
Jul 8 09:55:21.510207 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 8 09:55:21.510372 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 8 09:55:21.511526 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 8 09:55:21.511571 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 8 09:55:21.512919 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 8 09:55:21.512968 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 8 09:55:21.514239 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 8 09:55:21.514277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 8 09:55:21.515757 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 8 09:55:21.517228 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 8 09:55:21.526517 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 8 09:55:21.526648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 8 09:55:21.530326 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 8 09:55:21.530652 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 8 09:55:21.530690 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 09:55:21.533659 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 8 09:55:21.536694 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 8 09:55:21.536825 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 8 09:55:21.539602 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 8 09:55:21.539827 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 8 09:55:21.541575 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 8 09:55:21.541606 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 09:55:21.544075 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 8 09:55:21.544736 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 8 09:55:21.544788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 8 09:55:21.545712 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 8 09:55:21.545749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 8 09:55:21.547848 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 8 09:55:21.547889 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 8 09:55:21.549464 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 09:55:21.552168 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 8 09:55:21.566739 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 8 09:55:21.571554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 8 09:55:21.572478 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 8 09:55:21.572600 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 09:55:21.573940 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 8 09:55:21.574010 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 8 09:55:21.575989 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 8 09:55:21.576031 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 8 09:55:21.577364 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 8 09:55:21.577394 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 09:55:21.578730 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 8 09:55:21.578770 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 8 09:55:21.580821 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 8 09:55:21.580866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 8 09:55:21.582818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 8 09:55:21.582866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 8 09:55:21.584869 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 8 09:55:21.584911 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 8 09:55:21.586858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 8 09:55:21.588159 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 8 09:55:21.588204 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 09:55:21.590471 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 8 09:55:21.590511 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 09:55:21.592723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 09:55:21.592759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 09:55:21.604383 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 8 09:55:21.604487 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 8 09:55:21.606565 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 8 09:55:21.608685 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 8 09:55:21.640081 systemd[1]: Switching root.
Jul 8 09:55:21.674352 systemd-journald[244]: Journal stopped
Jul 8 09:55:22.423839 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 8 09:55:22.423894 kernel: SELinux: policy capability network_peer_controls=1
Jul 8 09:55:22.423906 kernel: SELinux: policy capability open_perms=1
Jul 8 09:55:22.423922 kernel: SELinux: policy capability extended_socket_class=1
Jul 8 09:55:22.423935 kernel: SELinux: policy capability always_check_network=0
Jul 8 09:55:22.423948 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 8 09:55:22.423960 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 8 09:55:22.423970 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 8 09:55:22.423979 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 8 09:55:22.423989 kernel: SELinux: policy capability userspace_initial_context=0
Jul 8 09:55:22.423999 systemd[1]: Successfully loaded SELinux policy in 60.010ms.
Jul 8 09:55:22.424015 kernel: audit: type=1403 audit(1751968521.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 8 09:55:22.424033 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.417ms.
Jul 8 09:55:22.424045 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 8 09:55:22.424057 systemd[1]: Detected virtualization kvm.
Jul 8 09:55:22.424067 systemd[1]: Detected architecture arm64.
Jul 8 09:55:22.424078 systemd[1]: Detected first boot.
Jul 8 09:55:22.424090 systemd[1]: Initializing machine ID from VM UUID.
Jul 8 09:55:22.424100 zram_generator::config[1088]: No configuration found.
Jul 8 09:55:22.424112 kernel: NET: Registered PF_VSOCK protocol family
Jul 8 09:55:22.424122 systemd[1]: Populated /etc with preset unit settings.
Jul 8 09:55:22.424133 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 8 09:55:22.424144 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 8 09:55:22.424159 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 8 09:55:22.424170 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 8 09:55:22.424181 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 8 09:55:22.424192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 8 09:55:22.424202 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 8 09:55:22.424213 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 8 09:55:22.424223 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 8 09:55:22.424234 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 8 09:55:22.424246 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 8 09:55:22.424257 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 8 09:55:22.424267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 09:55:22.424279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 09:55:22.424290 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 8 09:55:22.424301 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 8 09:55:22.424312 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 8 09:55:22.424323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 8 09:55:22.424334 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 8 09:55:22.424346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 09:55:22.424356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 8 09:55:22.424367 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 8 09:55:22.424378 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 8 09:55:22.424388 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 8 09:55:22.424399 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 8 09:55:22.424411 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 09:55:22.424422 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 8 09:55:22.424434 systemd[1]: Reached target slices.target - Slice Units.
Jul 8 09:55:22.424445 systemd[1]: Reached target swap.target - Swaps.
Jul 8 09:55:22.424464 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 8 09:55:22.424476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 8 09:55:22.424488 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 8 09:55:22.424498 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 09:55:22.424508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 8 09:55:22.424519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 09:55:22.424530 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 8 09:55:22.424542 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 8 09:55:22.424553 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 8 09:55:22.424564 systemd[1]: Mounting media.mount - External Media Directory...
Jul 8 09:55:22.424574 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 8 09:55:22.424585 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 8 09:55:22.424596 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 8 09:55:22.424607 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 8 09:55:22.424618 systemd[1]: Reached target machines.target - Containers.
Jul 8 09:55:22.424628 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 8 09:55:22.424640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 09:55:22.424651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 8 09:55:22.424661 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 8 09:55:22.424672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 09:55:22.424682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 8 09:55:22.424693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 09:55:22.424704 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 8 09:55:22.424715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 09:55:22.424727 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 8 09:55:22.424737 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 8 09:55:22.424748 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 8 09:55:22.424758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 8 09:55:22.424769 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 8 09:55:22.424779 kernel: fuse: init (API version 7.41)
Jul 8 09:55:22.424789 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 09:55:22.424805 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 8 09:55:22.424818 kernel: loop: module loaded
Jul 8 09:55:22.424831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 8 09:55:22.424842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 8 09:55:22.424853 kernel: ACPI: bus type drm_connector registered
Jul 8 09:55:22.424863 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 8 09:55:22.424874 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 8 09:55:22.424885 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 8 09:55:22.424897 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 8 09:55:22.424907 systemd[1]: Stopped verity-setup.service.
Jul 8 09:55:22.424918 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 8 09:55:22.424928 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 8 09:55:22.424939 systemd[1]: Mounted media.mount - External Media Directory.
Jul 8 09:55:22.424949 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 8 09:55:22.424959 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 8 09:55:22.424993 systemd-journald[1160]: Collecting audit messages is disabled.
Jul 8 09:55:22.425014 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 8 09:55:22.425025 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 8 09:55:22.425035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 09:55:22.425048 systemd-journald[1160]: Journal started
Jul 8 09:55:22.425070 systemd-journald[1160]: Runtime Journal (/run/log/journal/ea4f09599fd34ad98b4c24e9930576d4) is 6M, max 48.5M, 42.4M free.
Jul 8 09:55:22.233534 systemd[1]: Queued start job for default target multi-user.target.
Jul 8 09:55:22.253393 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 8 09:55:22.253765 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 8 09:55:22.427824 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 8 09:55:22.428509 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 8 09:55:22.428675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 8 09:55:22.429809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 09:55:22.429976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 09:55:22.430999 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 8 09:55:22.431138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 8 09:55:22.432121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 09:55:22.432262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 09:55:22.433352 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 8 09:55:22.433510 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 8 09:55:22.434523 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 09:55:22.434687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 09:55:22.435768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 8 09:55:22.436867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 09:55:22.437986 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 8 09:55:22.439125 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 8 09:55:22.450117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 8 09:55:22.452019 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 8 09:55:22.453709 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 8 09:55:22.454560 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 8 09:55:22.454584 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 8 09:55:22.456076 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 8 09:55:22.467296 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 8 09:55:22.468192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 09:55:22.469470 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 8 09:55:22.471199 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 8 09:55:22.472226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 8 09:55:22.473572 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 8 09:55:22.474381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 8 09:55:22.475344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 8 09:55:22.478681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 8 09:55:22.480630 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 8 09:55:22.482756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 09:55:22.483221 systemd-journald[1160]: Time spent on flushing to /var/log/journal/ea4f09599fd34ad98b4c24e9930576d4 is 20.879ms for 887 entries.
Jul 8 09:55:22.483221 systemd-journald[1160]: System Journal (/var/log/journal/ea4f09599fd34ad98b4c24e9930576d4) is 8M, max 195.6M, 187.6M free.
Jul 8 09:55:22.523778 systemd-journald[1160]: Received client request to flush runtime journal.
Jul 8 09:55:22.523844 kernel: loop0: detected capacity change from 0 to 211168
Jul 8 09:55:22.523873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 8 09:55:22.484637 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 8 09:55:22.486438 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 8 09:55:22.492576 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 8 09:55:22.493764 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 8 09:55:22.495589 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 8 09:55:22.516558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 8 09:55:22.525567 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 8 09:55:22.528526 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 8 09:55:22.532144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 8 09:55:22.534206 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 8 09:55:22.541547 kernel: loop1: detected capacity change from 0 to 105936
Jul 8 09:55:22.553700 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jul 8 09:55:22.553718 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jul 8 09:55:22.557529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 09:55:22.560495 kernel: loop2: detected capacity change from 0 to 134232
Jul 8 09:55:22.587476 kernel: loop3: detected capacity change from 0 to 211168
Jul 8 09:55:22.593511 kernel: loop4: detected capacity change from 0 to 105936
Jul 8 09:55:22.598470 kernel: loop5: detected capacity change from 0 to 134232
Jul 8 09:55:22.603355 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 8 09:55:22.604023 (sd-merge)[1228]: Merged extensions into '/usr'.
Jul 8 09:55:22.610168 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 8 09:55:22.610190 systemd[1]: Reloading...
Jul 8 09:55:22.672479 zram_generator::config[1253]: No configuration found.
Jul 8 09:55:22.741010 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 8 09:55:22.742261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 09:55:22.804245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 8 09:55:22.804524 systemd[1]: Reloading finished in 193 ms.
Jul 8 09:55:22.827152 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 8 09:55:22.829480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 8 09:55:22.844742 systemd[1]: Starting ensure-sysext.service...
Jul 8 09:55:22.846325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 8 09:55:22.856716 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Jul 8 09:55:22.856732 systemd[1]: Reloading...
Jul 8 09:55:22.863008 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 8 09:55:22.863040 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 8 09:55:22.863292 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 8 09:55:22.863532 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 8 09:55:22.864154 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 8 09:55:22.864353 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 8 09:55:22.864401 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Jul 8 09:55:22.866933 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jul 8 09:55:22.866946 systemd-tmpfiles[1289]: Skipping /boot
Jul 8 09:55:22.872733 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Jul 8 09:55:22.872749 systemd-tmpfiles[1289]: Skipping /boot
Jul 8 09:55:22.911490 zram_generator::config[1316]: No configuration found.
Jul 8 09:55:22.975463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 09:55:23.037148 systemd[1]: Reloading finished in 180 ms.
Jul 8 09:55:23.048938 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 8 09:55:23.062594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 09:55:23.071118 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 8 09:55:23.073186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 8 09:55:23.088034 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 8 09:55:23.090821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 8 09:55:23.094845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 09:55:23.096674 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 8 09:55:23.101459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 09:55:23.106018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 09:55:23.109316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 09:55:23.111213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 09:55:23.112638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 09:55:23.112744 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 09:55:23.122427 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 8 09:55:23.124152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 09:55:23.124301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 09:55:23.127150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 09:55:23.128518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 09:55:23.130297 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 8 09:55:23.131830 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 09:55:23.131996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 09:55:23.138130 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 8 09:55:23.139898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 09:55:23.140965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 09:55:23.145046 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Jul 8 09:55:23.146641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 09:55:23.151046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 09:55:23.152228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 09:55:23.152468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 09:55:23.154094 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 8 09:55:23.156410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 09:55:23.157520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 09:55:23.165680 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 8 09:55:23.167096 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 8 09:55:23.168733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 09:55:23.170578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 09:55:23.171982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 09:55:23.173040 augenrules[1396]: No rules
Jul 8 09:55:23.173687 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 8 09:55:23.173887 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 8 09:55:23.175114 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 09:55:23.175256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 09:55:23.178129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 8 09:55:23.185394 systemd[1]: Finished ensure-sysext.service.
Jul 8 09:55:23.191132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 09:55:23.192134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 09:55:23.193882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 8 09:55:23.194725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 09:55:23.194768 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 09:55:23.196228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 8 09:55:23.197027 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 8 09:55:23.198837 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 8 09:55:23.199627 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 8 09:55:23.210555 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 8 09:55:23.210747 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 8 09:55:23.213146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 09:55:23.213321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 09:55:23.214686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 8 09:55:23.231126 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 8 09:55:23.283745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 8 09:55:23.286851 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 8 09:55:23.322031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 8 09:55:23.336116 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 8 09:55:23.337152 systemd-networkd[1436]: lo: Link UP
Jul 8 09:55:23.337166 systemd-networkd[1436]: lo: Gained carrier
Jul 8 09:55:23.337702 systemd[1]: Reached target time-set.target - System Time Set.
Jul 8 09:55:23.337957 systemd-networkd[1436]: Enumeration completed
Jul 8 09:55:23.339035 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 8 09:55:23.340641 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 8 09:55:23.340649 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 8 09:55:23.341165 systemd-networkd[1436]: eth0: Link UP Jul 8 09:55:23.341275 systemd-networkd[1436]: eth0: Gained carrier Jul 8 09:55:23.341292 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 09:55:23.342362 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 8 09:55:23.343376 systemd-resolved[1355]: Positive Trust Anchors: Jul 8 09:55:23.343399 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 8 09:55:23.343432 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 8 09:55:23.344069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 8 09:55:23.351246 systemd-resolved[1355]: Defaulting to hostname 'linux'. Jul 8 09:55:23.355835 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 8 09:55:23.356679 systemd[1]: Reached target network.target - Network. Jul 8 09:55:23.357310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 8 09:55:23.358181 systemd[1]: Reached target sysinit.target - System Initialization. Jul 8 09:55:23.359002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 8 09:55:23.359868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 8 09:55:23.360925 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 8 09:55:23.361758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 8 09:55:23.363111 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 8 09:55:23.363561 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 8 09:55:23.364042 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Jul 8 09:55:23.364242 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 8 09:55:23.364277 systemd[1]: Reached target paths.target - Path Units. Jul 8 09:55:22.878663 systemd-resolved[1355]: Clock change detected. Flushing caches. Jul 8 09:55:22.882402 systemd-journald[1160]: Time jumped backwards, rotating. Jul 8 09:55:22.879330 systemd[1]: Reached target timers.target - Timer Units. Jul 8 09:55:22.879445 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 8 09:55:22.880000 systemd-timesyncd[1437]: Initial clock synchronization to Tue 2025-07-08 09:55:22.878627 UTC. Jul 8 09:55:22.881713 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 8 09:55:22.902733 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 8 09:55:22.905703 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 8 09:55:22.906810 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 8 09:55:22.909269 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jul 8 09:55:22.915230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 8 09:55:22.916226 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 8 09:55:22.917677 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 8 09:55:22.924555 systemd[1]: Reached target sockets.target - Socket Units. Jul 8 09:55:22.925283 systemd[1]: Reached target basic.target - Basic System. Jul 8 09:55:22.925986 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 8 09:55:22.926013 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 8 09:55:22.926861 systemd[1]: Starting containerd.service - containerd container runtime... Jul 8 09:55:22.928451 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 8 09:55:22.929972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 8 09:55:22.931641 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 8 09:55:22.933197 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 8 09:55:22.933901 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 8 09:55:22.936274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 8 09:55:22.937883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 8 09:55:22.941245 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 8 09:55:22.942045 jq[1475]: false Jul 8 09:55:22.942885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 8 09:55:22.945985 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 8 09:55:22.949148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 8 09:55:22.950813 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 8 09:55:22.951314 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 8 09:55:22.953526 systemd[1]: Starting update-engine.service - Update Engine... Jul 8 09:55:22.957280 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 8 09:55:22.957545 extend-filesystems[1476]: Found /dev/vda6 Jul 8 09:55:22.958913 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 8 09:55:22.962185 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 8 09:55:22.963716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 8 09:55:22.963863 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 8 09:55:22.964469 jq[1490]: true Jul 8 09:55:22.967670 systemd[1]: motdgen.service: Deactivated successfully. Jul 8 09:55:22.968252 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 8 09:55:22.973764 extend-filesystems[1476]: Found /dev/vda9 Jul 8 09:55:22.975246 extend-filesystems[1476]: Checking size of /dev/vda9 Jul 8 09:55:22.978295 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 8 09:55:22.984042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 8 09:55:22.986476 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 8 09:55:22.992692 jq[1499]: true Jul 8 09:55:23.009936 extend-filesystems[1476]: Resized partition /dev/vda9 Jul 8 09:55:23.015322 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025) Jul 8 09:55:23.018178 tar[1498]: linux-arm64/LICENSE Jul 8 09:55:23.018178 tar[1498]: linux-arm64/helm Jul 8 09:55:23.021763 update_engine[1489]: I20250708 09:55:23.019884 1489 main.cc:92] Flatcar Update Engine starting Jul 8 09:55:23.026173 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 8 09:55:23.046392 dbus-daemon[1473]: [system] SELinux support is enabled Jul 8 09:55:23.047431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 8 09:55:23.052578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 8 09:55:23.055655 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 8 09:55:23.056214 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 8 09:55:23.057321 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 8 09:55:23.057339 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 8 09:55:23.063179 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 8 09:55:23.071614 update_engine[1489]: I20250708 09:55:23.069310 1489 update_check_scheduler.cc:74] Next update check in 3m58s Jul 8 09:55:23.069591 systemd[1]: Started update-engine.service - Update Engine. Jul 8 09:55:23.072305 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 8 09:55:23.076120 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 8 09:55:23.076120 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 8 09:55:23.076120 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 8 09:55:23.083743 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Jul 8 09:55:23.077312 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 8 09:55:23.077586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 8 09:55:23.089464 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Jul 8 09:55:23.093714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 8 09:55:23.095999 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 8 09:55:23.097021 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (Power Button) Jul 8 09:55:23.098611 systemd-logind[1484]: New seat seat0. Jul 8 09:55:23.107927 systemd[1]: Started systemd-logind.service - User Login Management. Jul 8 09:55:23.142294 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 8 09:55:23.145869 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 8 09:55:23.164817 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 8 09:55:23.169404 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 8 09:55:23.188533 systemd[1]: issuegen.service: Deactivated successfully. Jul 8 09:55:23.188708 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 8 09:55:23.190859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 8 09:55:23.213483 containerd[1500]: time="2025-07-08T09:55:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 8 09:55:23.214770 containerd[1500]: time="2025-07-08T09:55:23.214720145Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 8 09:55:23.217633 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 8 09:55:23.220455 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 8 09:55:23.222091 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 8 09:55:23.223237 systemd[1]: Reached target getty.target - Login Prompts. Jul 8 09:55:23.225074 containerd[1500]: time="2025-07-08T09:55:23.224997305Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.16µs" Jul 8 09:55:23.225074 containerd[1500]: time="2025-07-08T09:55:23.225050385Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 8 09:55:23.225074 containerd[1500]: time="2025-07-08T09:55:23.225069225Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 8 09:55:23.225244 containerd[1500]: time="2025-07-08T09:55:23.225213305Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 8 09:55:23.225244 containerd[1500]: time="2025-07-08T09:55:23.225228945Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 8 09:55:23.225277 containerd[1500]: time="2025-07-08T09:55:23.225249745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225299105Z" level=info msg="skip 
loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225315505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225612785Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225631825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225643225Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225809 containerd[1500]: time="2025-07-08T09:55:23.225652585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 8 09:55:23.225929 containerd[1500]: time="2025-07-08T09:55:23.225840105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 8 09:55:23.226066 containerd[1500]: time="2025-07-08T09:55:23.226022305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 8 09:55:23.226096 containerd[1500]: time="2025-07-08T09:55:23.226062625Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 8 09:55:23.226096 containerd[1500]: 
time="2025-07-08T09:55:23.226075065Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 8 09:55:23.226137 containerd[1500]: time="2025-07-08T09:55:23.226117265Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 8 09:55:23.226436 containerd[1500]: time="2025-07-08T09:55:23.226399905Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 8 09:55:23.226519 containerd[1500]: time="2025-07-08T09:55:23.226484385Z" level=info msg="metadata content store policy set" policy=shared Jul 8 09:55:23.230125 containerd[1500]: time="2025-07-08T09:55:23.230096105Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 8 09:55:23.230188 containerd[1500]: time="2025-07-08T09:55:23.230148745Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 8 09:55:23.230188 containerd[1500]: time="2025-07-08T09:55:23.230177905Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 8 09:55:23.230238 containerd[1500]: time="2025-07-08T09:55:23.230193505Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 8 09:55:23.230238 containerd[1500]: time="2025-07-08T09:55:23.230205585Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 8 09:55:23.230238 containerd[1500]: time="2025-07-08T09:55:23.230218865Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 8 09:55:23.230238 containerd[1500]: time="2025-07-08T09:55:23.230233665Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 8 09:55:23.230300 containerd[1500]: time="2025-07-08T09:55:23.230245305Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 8 09:55:23.230300 containerd[1500]: time="2025-07-08T09:55:23.230256505Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 8 09:55:23.230300 containerd[1500]: time="2025-07-08T09:55:23.230266425Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 8 09:55:23.230300 containerd[1500]: time="2025-07-08T09:55:23.230275425Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 8 09:55:23.230300 containerd[1500]: time="2025-07-08T09:55:23.230287025Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 8 09:55:23.230416 containerd[1500]: time="2025-07-08T09:55:23.230393185Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 8 09:55:23.230441 containerd[1500]: time="2025-07-08T09:55:23.230422345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 8 09:55:23.230503 containerd[1500]: time="2025-07-08T09:55:23.230440545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 8 09:55:23.230503 containerd[1500]: time="2025-07-08T09:55:23.230452185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 8 09:55:23.230545 containerd[1500]: time="2025-07-08T09:55:23.230502825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 8 09:55:23.230545 containerd[1500]: time="2025-07-08T09:55:23.230518065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 8 09:55:23.230545 containerd[1500]: time="2025-07-08T09:55:23.230529865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Jul 8 09:55:23.230545 containerd[1500]: time="2025-07-08T09:55:23.230539865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 8 09:55:23.230616 containerd[1500]: time="2025-07-08T09:55:23.230551425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 8 09:55:23.230616 containerd[1500]: time="2025-07-08T09:55:23.230562385Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 8 09:55:23.230616 containerd[1500]: time="2025-07-08T09:55:23.230572665Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 8 09:55:23.230768 containerd[1500]: time="2025-07-08T09:55:23.230749985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 8 09:55:23.230794 containerd[1500]: time="2025-07-08T09:55:23.230775625Z" level=info msg="Start snapshots syncer" Jul 8 09:55:23.230812 containerd[1500]: time="2025-07-08T09:55:23.230806665Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 8 09:55:23.231190 containerd[1500]: time="2025-07-08T09:55:23.231114465Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 8 09:55:23.231275 containerd[1500]: time="2025-07-08T09:55:23.231208425Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 8 09:55:23.231895 containerd[1500]: time="2025-07-08T09:55:23.231853185Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 8 09:55:23.232041 containerd[1500]: time="2025-07-08T09:55:23.232019985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 8 09:55:23.232070 containerd[1500]: time="2025-07-08T09:55:23.232056225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 8 09:55:23.232088 containerd[1500]: time="2025-07-08T09:55:23.232069025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 8 09:55:23.232088 containerd[1500]: time="2025-07-08T09:55:23.232082945Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 8 09:55:23.232122 containerd[1500]: time="2025-07-08T09:55:23.232095065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 8 09:55:23.232122 containerd[1500]: time="2025-07-08T09:55:23.232106225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 8 09:55:23.232122 containerd[1500]: time="2025-07-08T09:55:23.232116545Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 8 09:55:23.232232 containerd[1500]: time="2025-07-08T09:55:23.232140545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 8 09:55:23.232232 containerd[1500]: time="2025-07-08T09:55:23.232165785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 8 09:55:23.232232 containerd[1500]: time="2025-07-08T09:55:23.232178105Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 8 09:55:23.232889 containerd[1500]: time="2025-07-08T09:55:23.232857305Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 09:55:23.232925 containerd[1500]: time="2025-07-08T09:55:23.232890465Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 09:55:23.232925 containerd[1500]: time="2025-07-08T09:55:23.232902145Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 09:55:23.232925 containerd[1500]: time="2025-07-08T09:55:23.232912345Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 09:55:23.232925 containerd[1500]: time="2025-07-08T09:55:23.232919865Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 8 09:55:23.233001 containerd[1500]: time="2025-07-08T09:55:23.232935625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 8 09:55:23.233001 containerd[1500]: time="2025-07-08T09:55:23.232946985Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 8 09:55:23.233034 containerd[1500]: time="2025-07-08T09:55:23.233021345Z" level=info msg="runtime interface created" Jul 8 09:55:23.233034 containerd[1500]: time="2025-07-08T09:55:23.233027145Z" level=info msg="created NRI interface" Jul 8 09:55:23.233066 containerd[1500]: time="2025-07-08T09:55:23.233035465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 8 09:55:23.233066 containerd[1500]: time="2025-07-08T09:55:23.233046785Z" level=info msg="Connect containerd service" Jul 8 09:55:23.233101 containerd[1500]: time="2025-07-08T09:55:23.233078825Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 8 09:55:23.234066 containerd[1500]: 
time="2025-07-08T09:55:23.234028065Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 09:55:23.329481 containerd[1500]: time="2025-07-08T09:55:23.329397945Z" level=info msg="Start subscribing containerd event" Jul 8 09:55:23.329481 containerd[1500]: time="2025-07-08T09:55:23.329465745Z" level=info msg="Start recovering state" Jul 8 09:55:23.329597 containerd[1500]: time="2025-07-08T09:55:23.329558985Z" level=info msg="Start event monitor" Jul 8 09:55:23.329597 containerd[1500]: time="2025-07-08T09:55:23.329573465Z" level=info msg="Start cni network conf syncer for default" Jul 8 09:55:23.329597 containerd[1500]: time="2025-07-08T09:55:23.329581945Z" level=info msg="Start streaming server" Jul 8 09:55:23.329721 containerd[1500]: time="2025-07-08T09:55:23.329691265Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 8 09:55:23.329764 containerd[1500]: time="2025-07-08T09:55:23.329749065Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 8 09:55:23.332044 containerd[1500]: time="2025-07-08T09:55:23.332021185Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 8 09:55:23.332193 containerd[1500]: time="2025-07-08T09:55:23.332089225Z" level=info msg="runtime interface starting up..." Jul 8 09:55:23.332193 containerd[1500]: time="2025-07-08T09:55:23.332099705Z" level=info msg="starting plugins..." Jul 8 09:55:23.332193 containerd[1500]: time="2025-07-08T09:55:23.332121905Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 8 09:55:23.332536 containerd[1500]: time="2025-07-08T09:55:23.332512185Z" level=info msg="containerd successfully booted in 0.119377s" Jul 8 09:55:23.332600 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 8 09:55:23.362696 tar[1498]: linux-arm64/README.md Jul 8 09:55:23.376249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 8 09:55:24.071347 systemd-networkd[1436]: eth0: Gained IPv6LL Jul 8 09:55:24.075198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 8 09:55:24.077335 systemd[1]: Reached target network-online.target - Network is Online. Jul 8 09:55:24.079858 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 8 09:55:24.082139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 09:55:24.096600 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 8 09:55:24.110655 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 8 09:55:24.110877 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 8 09:55:24.112191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 8 09:55:24.115425 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 8 09:55:24.638168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 09:55:24.639474 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 8 09:55:24.642931 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 09:55:24.650559 systemd[1]: Startup finished in 2.020s (kernel) + 5.241s (initrd) + 3.331s (userspace) = 10.594s. 
Jul 8 09:55:25.041754 kubelet[1612]: E0708 09:55:25.041641 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 09:55:25.043822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 09:55:25.043956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 09:55:25.046245 systemd[1]: kubelet.service: Consumed 802ms CPU time, 260.7M memory peak. Jul 8 09:55:28.670385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 8 09:55:28.671355 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:52734.service - OpenSSH per-connection server daemon (10.0.0.1:52734). Jul 8 09:55:28.744364 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 52734 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:55:28.745938 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:55:28.751948 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 8 09:55:28.752823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 8 09:55:28.758622 systemd-logind[1484]: New session 1 of user core. Jul 8 09:55:28.774190 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 8 09:55:28.776541 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 8 09:55:28.787902 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 8 09:55:28.789951 systemd-logind[1484]: New session c1 of user core. Jul 8 09:55:28.881489 systemd[1630]: Queued start job for default target default.target. 
Jul 8 09:55:28.891639 systemd[1630]: Created slice app.slice - User Application Slice. Jul 8 09:55:28.891673 systemd[1630]: Reached target paths.target - Paths. Jul 8 09:55:28.891715 systemd[1630]: Reached target timers.target - Timers. Jul 8 09:55:28.893017 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 8 09:55:28.902116 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 8 09:55:28.902215 systemd[1630]: Reached target sockets.target - Sockets. Jul 8 09:55:28.902255 systemd[1630]: Reached target basic.target - Basic System. Jul 8 09:55:28.902282 systemd[1630]: Reached target default.target - Main User Target. Jul 8 09:55:28.902307 systemd[1630]: Startup finished in 107ms. Jul 8 09:55:28.902708 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 8 09:55:28.904214 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 8 09:55:28.973538 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:52740.service - OpenSSH per-connection server daemon (10.0.0.1:52740). Jul 8 09:55:29.015040 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 52740 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:55:29.016284 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:55:29.019906 systemd-logind[1484]: New session 2 of user core. Jul 8 09:55:29.031293 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 8 09:55:29.082923 sshd[1644]: Connection closed by 10.0.0.1 port 52740 Jul 8 09:55:29.083542 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jul 8 09:55:29.094132 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:52740.service: Deactivated successfully. Jul 8 09:55:29.096640 systemd[1]: session-2.scope: Deactivated successfully. Jul 8 09:55:29.097318 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. 
Jul 8 09:55:29.099436 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756).
Jul 8 09:55:29.100084 systemd-logind[1484]: Removed session 2.
Jul 8 09:55:29.153249 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk
Jul 8 09:55:29.154324 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 09:55:29.157893 systemd-logind[1484]: New session 3 of user core.
Jul 8 09:55:29.167285 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 8 09:55:29.213993 sshd[1653]: Connection closed by 10.0.0.1 port 52756
Jul 8 09:55:29.213862 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jul 8 09:55:29.224087 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:52756.service: Deactivated successfully.
Jul 8 09:55:29.226535 systemd[1]: session-3.scope: Deactivated successfully.
Jul 8 09:55:29.227143 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit.
Jul 8 09:55:29.229893 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760).
Jul 8 09:55:29.230566 systemd-logind[1484]: Removed session 3.
Jul 8 09:55:29.282826 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk
Jul 8 09:55:29.283852 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 09:55:29.287981 systemd-logind[1484]: New session 4 of user core.
Jul 8 09:55:29.293300 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 8 09:55:29.343262 sshd[1662]: Connection closed by 10.0.0.1 port 52760
Jul 8 09:55:29.343549 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Jul 8 09:55:29.354978 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:52760.service: Deactivated successfully.
Jul 8 09:55:29.356639 systemd[1]: session-4.scope: Deactivated successfully.
Jul 8 09:55:29.358654 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit.
Jul 8 09:55:29.360612 systemd-logind[1484]: Removed session 4.
Jul 8 09:55:29.361901 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:52762.service - OpenSSH per-connection server daemon (10.0.0.1:52762).
Jul 8 09:55:29.420990 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 52762 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk
Jul 8 09:55:29.421490 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 09:55:29.425996 systemd-logind[1484]: New session 5 of user core.
Jul 8 09:55:29.436301 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 8 09:55:29.498594 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 8 09:55:29.498848 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 8 09:55:29.512983 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jul 8 09:55:29.514272 sshd[1671]: Connection closed by 10.0.0.1 port 52762
Jul 8 09:55:29.514779 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Jul 8 09:55:29.529061 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:52762.service: Deactivated successfully.
Jul 8 09:55:29.530610 systemd[1]: session-5.scope: Deactivated successfully.
Jul 8 09:55:29.534290 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit.
Jul 8 09:55:29.536276 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:52766.service - OpenSSH per-connection server daemon (10.0.0.1:52766).
Jul 8 09:55:29.536880 systemd-logind[1484]: Removed session 5.
Jul 8 09:55:29.593908 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 52766 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk
Jul 8 09:55:29.594942 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 09:55:29.599212 systemd-logind[1484]: New session 6 of user core.
Jul 8 09:55:29.609355 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 8 09:55:29.659527 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 8 09:55:29.660013 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 8 09:55:29.733942 sudo[1683]: pam_unix(sudo:session): session closed for user root
Jul 8 09:55:29.739015 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 8 09:55:29.739602 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 8 09:55:29.748854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 8 09:55:29.780699 augenrules[1705]: No rules
Jul 8 09:55:29.781725 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 8 09:55:29.781942 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 8 09:55:29.782835 sudo[1682]: pam_unix(sudo:session): session closed for user root
Jul 8 09:55:29.784233 sshd[1681]: Connection closed by 10.0.0.1 port 52766
Jul 8 09:55:29.784452 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Jul 8 09:55:29.793033 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:52766.service: Deactivated successfully.
Jul 8 09:55:29.794321 systemd[1]: session-6.scope: Deactivated successfully.
Jul 8 09:55:29.796306 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit.
Jul 8 09:55:29.798282 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:52782.service - OpenSSH per-connection server daemon (10.0.0.1:52782).
Jul 8 09:55:29.798957 systemd-logind[1484]: Removed session 6.
Jul 8 09:55:29.853822 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 52782 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk
Jul 8 09:55:29.854835 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 09:55:29.858447 systemd-logind[1484]: New session 7 of user core.
Jul 8 09:55:29.867346 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 8 09:55:29.916130 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 8 09:55:29.916660 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 8 09:55:30.261058 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 8 09:55:30.273545 (dockerd)[1738]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 8 09:55:30.523535 dockerd[1738]: time="2025-07-08T09:55:30.523414385Z" level=info msg="Starting up"
Jul 8 09:55:30.524462 dockerd[1738]: time="2025-07-08T09:55:30.524437905Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 8 09:55:30.533727 dockerd[1738]: time="2025-07-08T09:55:30.533697625Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 8 09:55:30.562353 dockerd[1738]: time="2025-07-08T09:55:30.562312985Z" level=info msg="Loading containers: start."
Jul 8 09:55:30.570187 kernel: Initializing XFRM netlink socket
Jul 8 09:55:30.764808 systemd-networkd[1436]: docker0: Link UP
Jul 8 09:55:30.767604 dockerd[1738]: time="2025-07-08T09:55:30.767567065Z" level=info msg="Loading containers: done."
Jul 8 09:55:30.779422 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1154277439-merged.mount: Deactivated successfully.
Jul 8 09:55:30.780972 dockerd[1738]: time="2025-07-08T09:55:30.780937945Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 8 09:55:30.781098 dockerd[1738]: time="2025-07-08T09:55:30.781080825Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 8 09:55:30.781250 dockerd[1738]: time="2025-07-08T09:55:30.781227745Z" level=info msg="Initializing buildkit"
Jul 8 09:55:30.801511 dockerd[1738]: time="2025-07-08T09:55:30.801462425Z" level=info msg="Completed buildkit initialization"
Jul 8 09:55:30.809503 dockerd[1738]: time="2025-07-08T09:55:30.809448945Z" level=info msg="Daemon has completed initialization"
Jul 8 09:55:30.809649 dockerd[1738]: time="2025-07-08T09:55:30.809535465Z" level=info msg="API listen on /run/docker.sock"
Jul 8 09:55:30.809669 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 8 09:55:31.330835 containerd[1500]: time="2025-07-08T09:55:31.330799985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 8 09:55:31.961116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330471368.mount: Deactivated successfully.
Jul 8 09:55:33.129019 containerd[1500]: time="2025-07-08T09:55:33.128970625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:33.129505 containerd[1500]: time="2025-07-08T09:55:33.129474905Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 8 09:55:33.130271 containerd[1500]: time="2025-07-08T09:55:33.130242545Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:33.133262 containerd[1500]: time="2025-07-08T09:55:33.133211225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:33.134875 containerd[1500]: time="2025-07-08T09:55:33.134495465Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.80365796s"
Jul 8 09:55:33.134875 containerd[1500]: time="2025-07-08T09:55:33.134533345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 8 09:55:33.137888 containerd[1500]: time="2025-07-08T09:55:33.137863105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 8 09:55:34.201688 containerd[1500]: time="2025-07-08T09:55:34.201481745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:34.202508 containerd[1500]: time="2025-07-08T09:55:34.202433585Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 8 09:55:34.203170 containerd[1500]: time="2025-07-08T09:55:34.203130625Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:34.206022 containerd[1500]: time="2025-07-08T09:55:34.205985945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:34.207055 containerd[1500]: time="2025-07-08T09:55:34.207020345Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.06904448s"
Jul 8 09:55:34.207055 containerd[1500]: time="2025-07-08T09:55:34.207055345Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 8 09:55:34.207578 containerd[1500]: time="2025-07-08T09:55:34.207526585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 8 09:55:35.282213 containerd[1500]: time="2025-07-08T09:55:35.282111505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:35.283074 containerd[1500]: time="2025-07-08T09:55:35.283021825Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 8 09:55:35.283826 containerd[1500]: time="2025-07-08T09:55:35.283790785Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:35.286629 containerd[1500]: time="2025-07-08T09:55:35.286594825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:35.288128 containerd[1500]: time="2025-07-08T09:55:35.288064545Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.08049528s"
Jul 8 09:55:35.288185 containerd[1500]: time="2025-07-08T09:55:35.288125825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 8 09:55:35.288576 containerd[1500]: time="2025-07-08T09:55:35.288546185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 8 09:55:35.294304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 8 09:55:35.295710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:35.437699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:35.441395 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 8 09:55:35.473006 kubelet[2026]: E0708 09:55:35.472952 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 8 09:55:35.476406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 8 09:55:35.476544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 8 09:55:35.477085 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.5M memory peak.
Jul 8 09:55:36.374138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591495413.mount: Deactivated successfully.
Jul 8 09:55:36.737110 containerd[1500]: time="2025-07-08T09:55:36.737000665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:36.737998 containerd[1500]: time="2025-07-08T09:55:36.737834825Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 8 09:55:36.738619 containerd[1500]: time="2025-07-08T09:55:36.738578425Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:36.740585 containerd[1500]: time="2025-07-08T09:55:36.740536825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:36.741009 containerd[1500]: time="2025-07-08T09:55:36.740974225Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.45239336s"
Jul 8 09:55:36.741053 containerd[1500]: time="2025-07-08T09:55:36.741008465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 8 09:55:36.741885 containerd[1500]: time="2025-07-08T09:55:36.741756745Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 8 09:55:37.260028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830879060.mount: Deactivated successfully.
Jul 8 09:55:38.041030 containerd[1500]: time="2025-07-08T09:55:38.040973505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:38.041835 containerd[1500]: time="2025-07-08T09:55:38.041792265Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 8 09:55:38.042383 containerd[1500]: time="2025-07-08T09:55:38.042357305Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:38.045654 containerd[1500]: time="2025-07-08T09:55:38.045614985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:38.046543 containerd[1500]: time="2025-07-08T09:55:38.046509825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.30469104s"
Jul 8 09:55:38.046582 containerd[1500]: time="2025-07-08T09:55:38.046543665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 8 09:55:38.047190 containerd[1500]: time="2025-07-08T09:55:38.047032665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 8 09:55:38.496131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245293842.mount: Deactivated successfully.
Jul 8 09:55:38.499868 containerd[1500]: time="2025-07-08T09:55:38.499633025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 8 09:55:38.500445 containerd[1500]: time="2025-07-08T09:55:38.500407825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 8 09:55:38.501118 containerd[1500]: time="2025-07-08T09:55:38.501065745Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 8 09:55:38.503321 containerd[1500]: time="2025-07-08T09:55:38.503275105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 8 09:55:38.504113 containerd[1500]: time="2025-07-08T09:55:38.503935105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 456.86956ms"
Jul 8 09:55:38.504113 containerd[1500]: time="2025-07-08T09:55:38.503967065Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 8 09:55:38.504511 containerd[1500]: time="2025-07-08T09:55:38.504482625Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 8 09:55:38.970089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676385370.mount: Deactivated successfully.
Jul 8 09:55:40.650681 containerd[1500]: time="2025-07-08T09:55:40.650630985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:40.651203 containerd[1500]: time="2025-07-08T09:55:40.651171545Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 8 09:55:40.651973 containerd[1500]: time="2025-07-08T09:55:40.651927305Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:40.654436 containerd[1500]: time="2025-07-08T09:55:40.654379065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 09:55:40.655555 containerd[1500]: time="2025-07-08T09:55:40.655510865Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.15099652s"
Jul 8 09:55:40.655616 containerd[1500]: time="2025-07-08T09:55:40.655539105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 8 09:55:45.726751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 8 09:55:45.728909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:45.850571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:45.854387 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 8 09:55:45.890091 kubelet[2186]: E0708 09:55:45.890034 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 8 09:55:45.892658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 8 09:55:45.892790 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 8 09:55:45.894225 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.9M memory peak.
Jul 8 09:55:46.974343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:46.974517 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.9M memory peak.
Jul 8 09:55:46.976391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:46.995405 systemd[1]: Reload requested from client PID 2201 ('systemctl') (unit session-7.scope)...
Jul 8 09:55:46.995418 systemd[1]: Reloading...
Jul 8 09:55:47.075299 zram_generator::config[2246]: No configuration found.
Jul 8 09:55:47.235412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 09:55:47.319222 systemd[1]: Reloading finished in 323 ms.
Jul 8 09:55:47.386595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 8 09:55:47.386664 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 8 09:55:47.386873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:47.386909 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95M memory peak.
Jul 8 09:55:47.388199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:47.499182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:47.502601 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 8 09:55:47.534650 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 8 09:55:47.534650 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 8 09:55:47.534650 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 8 09:55:47.534933 kubelet[2288]: I0708 09:55:47.534688 2288 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 8 09:55:48.518232 kubelet[2288]: I0708 09:55:48.517879 2288 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 8 09:55:48.518232 kubelet[2288]: I0708 09:55:48.517910 2288 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 8 09:55:48.518232 kubelet[2288]: I0708 09:55:48.518130 2288 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 8 09:55:48.565832 kubelet[2288]: E0708 09:55:48.565770 2288 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 8 09:55:48.567610 kubelet[2288]: I0708 09:55:48.567553 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 8 09:55:48.578871 kubelet[2288]: I0708 09:55:48.578852 2288 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 8 09:55:48.581565 kubelet[2288]: I0708 09:55:48.581542 2288 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 8 09:55:48.582547 kubelet[2288]: I0708 09:55:48.582500 2288 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 8 09:55:48.582695 kubelet[2288]: I0708 09:55:48.582539 2288 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 8 09:55:48.582778 kubelet[2288]: I0708 09:55:48.582746 2288 topology_manager.go:138] "Creating topology manager with none policy"
Jul 8 09:55:48.582778 kubelet[2288]: I0708 09:55:48.582755 2288 container_manager_linux.go:303] "Creating device plugin manager"
Jul 8 09:55:48.582951 kubelet[2288]: I0708 09:55:48.582928 2288 state_mem.go:36] "Initialized new in-memory state store"
Jul 8 09:55:48.585424 kubelet[2288]: I0708 09:55:48.585391 2288 kubelet.go:480] "Attempting to sync node with API server"
Jul 8 09:55:48.585424 kubelet[2288]: I0708 09:55:48.585424 2288 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 8 09:55:48.587250 kubelet[2288]: I0708 09:55:48.587220 2288 kubelet.go:386] "Adding apiserver pod source"
Jul 8 09:55:48.588620 kubelet[2288]: I0708 09:55:48.588333 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 8 09:55:48.592744 kubelet[2288]: I0708 09:55:48.592132 2288 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 8 09:55:48.592744 kubelet[2288]: E0708 09:55:48.592697 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 8 09:55:48.592744 kubelet[2288]: E0708 09:55:48.592702 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 8 09:55:48.592874 kubelet[2288]: I0708 09:55:48.592826 2288 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 8 09:55:48.593023 kubelet[2288]: W0708 09:55:48.592995 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 8 09:55:48.595599 kubelet[2288]: I0708 09:55:48.595565 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 8 09:55:48.595599 kubelet[2288]: I0708 09:55:48.595602 2288 server.go:1289] "Started kubelet"
Jul 8 09:55:48.595761 kubelet[2288]: I0708 09:55:48.595726 2288 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 8 09:55:48.596789 kubelet[2288]: I0708 09:55:48.596768 2288 server.go:317] "Adding debug handlers to kubelet server"
Jul 8 09:55:48.597792 kubelet[2288]: I0708 09:55:48.597632 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 8 09:55:48.597946 kubelet[2288]: I0708 09:55:48.597919 2288 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 8 09:55:48.600143 kubelet[2288]: I0708 09:55:48.600118 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 8 09:55:48.600236 kubelet[2288]: E0708 09:55:48.600221 2288 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 8 09:55:48.600472 kubelet[2288]: I0708 09:55:48.600457 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 8 09:55:48.601488 kubelet[2288]: E0708 09:55:48.601465 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 8 09:55:48.601534 kubelet[2288]: I0708 09:55:48.601494 2288 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 8 09:55:48.601683 kubelet[2288]: I0708 09:55:48.601660 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 8 09:55:48.601732 kubelet[2288]: I0708 09:55:48.601716 2288 reconciler.go:26] "Reconciler: start to sync state"
Jul 8 09:55:48.602106 kubelet[2288]: E0708 09:55:48.602079 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 8 09:55:48.605335 kubelet[2288]: E0708 09:55:48.598456 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18503e1cd4b239f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-08 09:55:48.595579385 +0000 UTC m=+1.089972001,LastTimestamp:2025-07-08 09:55:48.595579385 +0000 UTC
m=+1.089972001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 8 09:55:48.605846 kubelet[2288]: E0708 09:55:48.605809 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Jul 8 09:55:48.605993 kubelet[2288]: I0708 09:55:48.605979 2288 factory.go:223] Registration of the systemd container factory successfully Jul 8 09:55:48.606090 kubelet[2288]: I0708 09:55:48.606069 2288 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 8 09:55:48.606974 kubelet[2288]: I0708 09:55:48.606952 2288 factory.go:223] Registration of the containerd container factory successfully Jul 8 09:55:48.616481 kubelet[2288]: I0708 09:55:48.616454 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 8 09:55:48.616481 kubelet[2288]: I0708 09:55:48.616471 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 8 09:55:48.616580 kubelet[2288]: I0708 09:55:48.616487 2288 state_mem.go:36] "Initialized new in-memory state store" Jul 8 09:55:48.697641 kubelet[2288]: I0708 09:55:48.697347 2288 policy_none.go:49] "None policy: Start" Jul 8 09:55:48.697641 kubelet[2288]: I0708 09:55:48.697375 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 8 09:55:48.697641 kubelet[2288]: I0708 09:55:48.697387 2288 state_mem.go:35] "Initializing new in-memory state store" Jul 8 09:55:48.702525 kubelet[2288]: E0708 09:55:48.701668 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 09:55:48.702525 kubelet[2288]: I0708 09:55:48.702119 2288 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Jul 8 09:55:48.703604 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 8 09:55:48.703871 kubelet[2288]: I0708 09:55:48.703856 2288 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 8 09:55:48.703937 kubelet[2288]: I0708 09:55:48.703929 2288 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 8 09:55:48.703992 kubelet[2288]: I0708 09:55:48.703983 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 8 09:55:48.704088 kubelet[2288]: I0708 09:55:48.704073 2288 kubelet.go:2436] "Starting kubelet main sync loop" Jul 8 09:55:48.704229 kubelet[2288]: E0708 09:55:48.704213 2288 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 8 09:55:48.705482 kubelet[2288]: E0708 09:55:48.705454 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 8 09:55:48.714996 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 8 09:55:48.726696 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 8 09:55:48.727880 kubelet[2288]: E0708 09:55:48.727839 2288 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 8 09:55:48.728033 kubelet[2288]: I0708 09:55:48.728009 2288 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 8 09:55:48.728060 kubelet[2288]: I0708 09:55:48.728024 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 8 09:55:48.728410 kubelet[2288]: I0708 09:55:48.728355 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 8 09:55:48.728987 kubelet[2288]: E0708 09:55:48.728921 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 8 09:55:48.728987 kubelet[2288]: E0708 09:55:48.728961 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 8 09:55:48.806464 kubelet[2288]: E0708 09:55:48.806365 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms"
Jul 8 09:55:48.815676 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 8 09:55:48.829817 kubelet[2288]: I0708 09:55:48.829791 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 8 09:55:48.830263 kubelet[2288]: E0708 09:55:48.830235 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Jul 8 09:55:48.834942 kubelet[2288]: E0708 09:55:48.834920 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:48.836620 systemd[1]: Created slice kubepods-burstable-pod00d49d82d1d334fe7ae4d6de712b1169.slice - libcontainer container kubepods-burstable-pod00d49d82d1d334fe7ae4d6de712b1169.slice.
Jul 8 09:55:48.857592 kubelet[2288]: E0708 09:55:48.857452 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:48.859774 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 8 09:55:48.861198 kubelet[2288]: E0708 09:55:48.861172 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:48.903361 kubelet[2288]: I0708 09:55:48.903332 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:48.903424 kubelet[2288]: I0708 09:55:48.903369 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:48.903424 kubelet[2288]: I0708 09:55:48.903391 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:48.903424 kubelet[2288]: I0708 09:55:48.903414 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:48.903488 kubelet[2288]: I0708 09:55:48.903429 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 8 09:55:48.903488 kubelet[2288]: I0708 09:55:48.903444 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:48.903488 kubelet[2288]: I0708 09:55:48.903457 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:48.903488 kubelet[2288]: I0708 09:55:48.903471 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:48.903563 kubelet[2288]: I0708 09:55:48.903491 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:49.031709 kubelet[2288]: I0708 09:55:49.031676 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 8 09:55:49.032096 kubelet[2288]: E0708 09:55:49.032057 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Jul 8 09:55:49.136330 containerd[1500]: time="2025-07-08T09:55:49.136234585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 8 09:55:49.153087 containerd[1500]: time="2025-07-08T09:55:49.153003345Z" level=info msg="connecting to shim f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7" address="unix:///run/containerd/s/9a9a7e4f96486dad7585c5ebe339de1c1021de77658a9ea7884efdbc23425d32" namespace=k8s.io protocol=ttrpc version=3
Jul 8 09:55:49.159070 containerd[1500]: time="2025-07-08T09:55:49.158860385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00d49d82d1d334fe7ae4d6de712b1169,Namespace:kube-system,Attempt:0,}"
Jul 8 09:55:49.162823 containerd[1500]: time="2025-07-08T09:55:49.162784545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 8 09:55:49.176373 systemd[1]: Started cri-containerd-f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7.scope - libcontainer container f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7.
Jul 8 09:55:49.183559 containerd[1500]: time="2025-07-08T09:55:49.183524345Z" level=info msg="connecting to shim 35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9" address="unix:///run/containerd/s/565633f3e91d15a9cb94805d10b0aec3a4d01cf1ba0bba949e008df1a0b01314" namespace=k8s.io protocol=ttrpc version=3
Jul 8 09:55:49.188180 containerd[1500]: time="2025-07-08T09:55:49.187788825Z" level=info msg="connecting to shim 4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810" address="unix:///run/containerd/s/fb2f710481e5bb9d81c7b147273a8d7996c521cbdde869b79cf5527c5852f566" namespace=k8s.io protocol=ttrpc version=3
Jul 8 09:55:49.206301 systemd[1]: Started cri-containerd-35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9.scope - libcontainer container 35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9.
Jul 8 09:55:49.207138 kubelet[2288]: E0708 09:55:49.207111 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms"
Jul 8 09:55:49.209908 systemd[1]: Started cri-containerd-4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810.scope - libcontainer container 4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810.
Jul 8 09:55:49.211717 containerd[1500]: time="2025-07-08T09:55:49.211663385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7\""
Jul 8 09:55:49.217532 containerd[1500]: time="2025-07-08T09:55:49.217489305Z" level=info msg="CreateContainer within sandbox \"f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 8 09:55:49.224516 containerd[1500]: time="2025-07-08T09:55:49.224422905Z" level=info msg="Container 34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:55:49.230101 containerd[1500]: time="2025-07-08T09:55:49.230071985Z" level=info msg="CreateContainer within sandbox \"f154a3e9e81728f74dc8584d524a809037a908e69d2a69083fcabb1f724ae5c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560\""
Jul 8 09:55:49.231715 containerd[1500]: time="2025-07-08T09:55:49.231686745Z" level=info msg="StartContainer for \"34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560\""
Jul 8 09:55:49.232776 containerd[1500]: time="2025-07-08T09:55:49.232738265Z" level=info msg="connecting to shim 34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560" address="unix:///run/containerd/s/9a9a7e4f96486dad7585c5ebe339de1c1021de77658a9ea7884efdbc23425d32" protocol=ttrpc version=3
Jul 8 09:55:49.249335 containerd[1500]: time="2025-07-08T09:55:49.249276105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00d49d82d1d334fe7ae4d6de712b1169,Namespace:kube-system,Attempt:0,} returns sandbox id \"35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9\""
Jul 8 09:55:49.250127 containerd[1500]: time="2025-07-08T09:55:49.250106545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810\""
Jul 8 09:55:49.254498 systemd[1]: Started cri-containerd-34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560.scope - libcontainer container 34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560.
Jul 8 09:55:49.255354 containerd[1500]: time="2025-07-08T09:55:49.255326945Z" level=info msg="CreateContainer within sandbox \"35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 8 09:55:49.256383 containerd[1500]: time="2025-07-08T09:55:49.256332625Z" level=info msg="CreateContainer within sandbox \"4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 8 09:55:49.264098 containerd[1500]: time="2025-07-08T09:55:49.264033825Z" level=info msg="Container b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:55:49.266866 containerd[1500]: time="2025-07-08T09:55:49.266831105Z" level=info msg="Container 98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:55:49.274167 containerd[1500]: time="2025-07-08T09:55:49.274062705Z" level=info msg="CreateContainer within sandbox \"35adf6ac53fb4d2eef0913a5f780f07f55f502f7e289aff18e02a7159aa853c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4\""
Jul 8 09:55:49.274475 containerd[1500]: time="2025-07-08T09:55:49.274417025Z" level=info msg="CreateContainer within sandbox \"4ba79cbac7ae0eb52d1d75b1cf3c021c7065a638a1239e5332affc2113bde810\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f\""
Jul 8 09:55:49.274623 containerd[1500]: time="2025-07-08T09:55:49.274589225Z" level=info msg="StartContainer for \"b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4\""
Jul 8 09:55:49.275501 containerd[1500]: time="2025-07-08T09:55:49.275451425Z" level=info msg="StartContainer for \"98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f\""
Jul 8 09:55:49.276374 containerd[1500]: time="2025-07-08T09:55:49.276342185Z" level=info msg="connecting to shim b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4" address="unix:///run/containerd/s/565633f3e91d15a9cb94805d10b0aec3a4d01cf1ba0bba949e008df1a0b01314" protocol=ttrpc version=3
Jul 8 09:55:49.276481 containerd[1500]: time="2025-07-08T09:55:49.276344665Z" level=info msg="connecting to shim 98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f" address="unix:///run/containerd/s/fb2f710481e5bb9d81c7b147273a8d7996c521cbdde869b79cf5527c5852f566" protocol=ttrpc version=3
Jul 8 09:55:49.291691 containerd[1500]: time="2025-07-08T09:55:49.291587825Z" level=info msg="StartContainer for \"34ccad77b3e5f54c1f44a46388a0577b32bf2711484900b5eb4ba7aae8ccf560\" returns successfully"
Jul 8 09:55:49.302299 systemd[1]: Started cri-containerd-98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f.scope - libcontainer container 98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f.
Jul 8 09:55:49.303386 systemd[1]: Started cri-containerd-b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4.scope - libcontainer container b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4.
Jul 8 09:55:49.350453 containerd[1500]: time="2025-07-08T09:55:49.350416185Z" level=info msg="StartContainer for \"b6031bf873c10245069fa3371ca59d97637d776dd6e1d1f23bf504cbca628fc4\" returns successfully"
Jul 8 09:55:49.350824 containerd[1500]: time="2025-07-08T09:55:49.350801305Z" level=info msg="StartContainer for \"98908da0bda4b0b20df2948bc811276857156f5139c43b2d8f256917fe99520f\" returns successfully"
Jul 8 09:55:49.433573 kubelet[2288]: I0708 09:55:49.433489 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 8 09:55:49.433836 kubelet[2288]: E0708 09:55:49.433806 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Jul 8 09:55:49.715255 kubelet[2288]: E0708 09:55:49.715167 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:49.716919 kubelet[2288]: E0708 09:55:49.716898 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:49.718587 kubelet[2288]: E0708 09:55:49.718568 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:50.235912 kubelet[2288]: I0708 09:55:50.235699 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 8 09:55:50.722180 kubelet[2288]: E0708 09:55:50.721649 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:50.722180 kubelet[2288]: E0708 09:55:50.721940 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 8 09:55:51.798533 kubelet[2288]: E0708 09:55:51.798489 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 8 09:55:51.975300 kubelet[2288]: I0708 09:55:51.975245 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 8 09:55:52.002956 kubelet[2288]: I0708 09:55:52.002922 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 8 09:55:52.007574 kubelet[2288]: E0708 09:55:52.007538 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 8 09:55:52.007574 kubelet[2288]: I0708 09:55:52.007562 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:52.009089 kubelet[2288]: E0708 09:55:52.009057 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:52.009089 kubelet[2288]: I0708 09:55:52.009078 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:52.010665 kubelet[2288]: E0708 09:55:52.010642 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 8 09:55:52.592942 kubelet[2288]: I0708 09:55:52.592894 2288 apiserver.go:52] "Watching apiserver"
Jul 8 09:55:52.601930 kubelet[2288]: I0708 09:55:52.601903 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 8 09:55:53.285721 kubelet[2288]: I0708 09:55:53.285684 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 8 09:55:53.963290 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-7.scope)...
Jul 8 09:55:53.963306 systemd[1]: Reloading...
Jul 8 09:55:54.038407 zram_generator::config[2612]: No configuration found.
Jul 8 09:55:54.103226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 09:55:54.200229 systemd[1]: Reloading finished in 236 ms.
Jul 8 09:55:54.230059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:54.243042 systemd[1]: kubelet.service: Deactivated successfully.
Jul 8 09:55:54.244236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:54.244292 systemd[1]: kubelet.service: Consumed 1.501s CPU time, 128.4M memory peak.
Jul 8 09:55:54.245801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 8 09:55:54.372016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 8 09:55:54.376712 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 8 09:55:54.460302 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 8 09:55:54.460302 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 8 09:55:54.460302 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 8 09:55:54.460622 kubelet[2654]: I0708 09:55:54.460328 2654 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 8 09:55:54.468179 kubelet[2654]: I0708 09:55:54.467276 2654 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 8 09:55:54.468179 kubelet[2654]: I0708 09:55:54.467304 2654 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 8 09:55:54.468179 kubelet[2654]: I0708 09:55:54.467478 2654 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 8 09:55:54.468808 kubelet[2654]: I0708 09:55:54.468786 2654 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 8 09:55:54.471029 kubelet[2654]: I0708 09:55:54.470990 2654 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 8 09:55:54.474226 kubelet[2654]: I0708 09:55:54.474206 2654 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 8 09:55:54.478910 kubelet[2654]: I0708 09:55:54.478854 2654 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 8 09:55:54.479091 kubelet[2654]: I0708 09:55:54.479053 2654 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 8 09:55:54.479232 kubelet[2654]: I0708 09:55:54.479080 2654 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 8 09:55:54.479310 kubelet[2654]: I0708 09:55:54.479237 2654 topology_manager.go:138] "Creating topology manager with none policy"
Jul 8 09:55:54.479310 kubelet[2654]: I0708 09:55:54.479245 2654 container_manager_linux.go:303] "Creating device plugin manager"
Jul 8 09:55:54.479310 kubelet[2654]: I0708 09:55:54.479283 2654 state_mem.go:36] "Initialized new in-memory state store"
Jul 8 09:55:54.479446 kubelet[2654]: I0708 09:55:54.479434 2654 kubelet.go:480] "Attempting to sync node with API server"
Jul 8 09:55:54.479470 kubelet[2654]: I0708 09:55:54.479449 2654 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 8 09:55:54.479470 kubelet[2654]: I0708 09:55:54.479469 2654 kubelet.go:386] "Adding apiserver pod source"
Jul 8 09:55:54.479519 kubelet[2654]: I0708 09:55:54.479481 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 8 09:55:54.481285 kubelet[2654]: I0708 09:55:54.481216 2654 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 8 09:55:54.481834 kubelet[2654]: I0708 09:55:54.481815 2654 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 8 09:55:54.483505 kubelet[2654]: I0708 09:55:54.483484 2654 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 8 09:55:54.483573 kubelet[2654]: I0708 09:55:54.483533 2654 server.go:1289] "Started kubelet"
Jul 8 09:55:54.483658 kubelet[2654]: I0708 09:55:54.483632 2654 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 8 09:55:54.483846 kubelet[2654]: I0708 09:55:54.483806 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 8 09:55:54.484041 kubelet[2654]: I0708 09:55:54.484016 2654 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 8 09:55:54.484704 kubelet[2654]: I0708 09:55:54.484682 2654 server.go:317] "Adding debug handlers to kubelet server"
Jul 8 09:55:54.491164 kubelet[2654]: E0708 09:55:54.488407 2654 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 8 09:55:54.491164 kubelet[2654]: I0708 09:55:54.488696 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 8 09:55:54.491164 kubelet[2654]: I0708 09:55:54.488930 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 8 09:55:54.492711 kubelet[2654]: E0708 09:55:54.492445 2654 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 8 09:55:54.492711 kubelet[2654]: I0708 09:55:54.492484 2654 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 8 09:55:54.492711 kubelet[2654]: I0708 09:55:54.492653 2654 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 8 09:55:54.492938 kubelet[2654]: I0708 09:55:54.492914 2654 reconciler.go:26] "Reconciler: start to sync state"
Jul 8 09:55:54.495406 kubelet[2654]: I0708 09:55:54.495381 2654 factory.go:223] Registration of the systemd container factory successfully
Jul 8 09:55:54.495580 kubelet[2654]: I0708 09:55:54.495559 2654 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 8 09:55:54.502055 kubelet[2654]: I0708 09:55:54.502026 2654 factory.go:223] Registration of the containerd container factory successfully
Jul 8 09:55:54.502625 kubelet[2654]: I0708 09:55:54.502509 2654 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 8 09:55:54.504194 kubelet[2654]: I0708 09:55:54.503706 2654 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 8 09:55:54.504194 kubelet[2654]: I0708 09:55:54.503727 2654 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 8 09:55:54.504194 kubelet[2654]: I0708 09:55:54.503743 2654 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 8 09:55:54.504194 kubelet[2654]: I0708 09:55:54.503751 2654 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 8 09:55:54.504194 kubelet[2654]: E0708 09:55:54.503787 2654 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 8 09:55:54.534420 kubelet[2654]: I0708 09:55:54.534391 2654 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 8 09:55:54.534420 kubelet[2654]: I0708 09:55:54.534410 2654 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 8 09:55:54.534420 kubelet[2654]: I0708 09:55:54.534428 2654 state_mem.go:36] "Initialized new in-memory state store"
Jul 8 09:55:54.534603 kubelet[2654]: I0708 09:55:54.534534 2654 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 8 09:55:54.534603 kubelet[2654]: I0708 09:55:54.534543 2654 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 8 09:55:54.534603 kubelet[2654]: I0708 09:55:54.534557 2654 policy_none.go:49] "None policy: Start"
Jul 8 09:55:54.534603 kubelet[2654]: I0708 09:55:54.534565 2654 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 8 09:55:54.534603 kubelet[2654]: I0708 09:55:54.534573 2654 state_mem.go:35] "Initializing new in-memory state store"
Jul 8 09:55:54.534695 kubelet[2654]: I0708 09:55:54.534649 2654 state_mem.go:75] "Updated machine memory state"
Jul 8 09:55:54.537952 kubelet[2654]: E0708 09:55:54.537911 2654 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 8 09:55:54.538131 kubelet[2654]: I0708 09:55:54.538111
2654 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 8 09:55:54.538181 kubelet[2654]: I0708 09:55:54.538131 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 8 09:55:54.538365 kubelet[2654]: I0708 09:55:54.538342 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 8 09:55:54.539356 kubelet[2654]: E0708 09:55:54.539257 2654 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 8 09:55:54.604867 kubelet[2654]: I0708 09:55:54.604833 2654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.605156 kubelet[2654]: I0708 09:55:54.605139 2654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 09:55:54.605266 kubelet[2654]: I0708 09:55:54.605253 2654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:54.610134 kubelet[2654]: E0708 09:55:54.610109 2654 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:54.639282 kubelet[2654]: I0708 09:55:54.639244 2654 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 09:55:54.644758 kubelet[2654]: I0708 09:55:54.644718 2654 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 8 09:55:54.644861 kubelet[2654]: I0708 09:55:54.644792 2654 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 8 09:55:54.794784 kubelet[2654]: I0708 09:55:54.794663 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:54.794784 kubelet[2654]: I0708 09:55:54.794713 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:54.794784 kubelet[2654]: I0708 09:55:54.794759 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.794950 kubelet[2654]: I0708 09:55:54.794788 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 8 09:55:54.794950 kubelet[2654]: I0708 09:55:54.794812 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00d49d82d1d334fe7ae4d6de712b1169-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00d49d82d1d334fe7ae4d6de712b1169\") " pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:54.794950 kubelet[2654]: I0708 09:55:54.794837 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.794950 kubelet[2654]: I0708 09:55:54.794851 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.794950 kubelet[2654]: I0708 09:55:54.794864 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.795049 kubelet[2654]: I0708 09:55:54.794877 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 09:55:54.964792 sudo[2694]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 8 09:55:54.965399 sudo[2694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 8 09:55:55.297397 sudo[2694]: pam_unix(sudo:session): session closed for user root Jul 8 09:55:55.480699 kubelet[2654]: I0708 09:55:55.480615 2654 apiserver.go:52] "Watching apiserver" Jul 8 09:55:55.493470 kubelet[2654]: I0708 09:55:55.493426 2654 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 8 09:55:55.523145 kubelet[2654]: I0708 09:55:55.522973 2654 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 09:55:55.523257 kubelet[2654]: I0708 09:55:55.523149 2654 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:55.529188 kubelet[2654]: E0708 09:55:55.528980 2654 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 8 09:55:55.529188 kubelet[2654]: E0708 09:55:55.529018 2654 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 8 09:55:55.538323 kubelet[2654]: I0708 09:55:55.538243 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.538224465 podStartE2EDuration="2.538224465s" podCreationTimestamp="2025-07-08 09:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:55:55.538042265 +0000 UTC m=+1.157751241" watchObservedRunningTime="2025-07-08 09:55:55.538224465 +0000 UTC m=+1.157933401" Jul 8 09:55:55.551910 kubelet[2654]: I0708 09:55:55.551795 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5517797450000002 podStartE2EDuration="1.551779745s" podCreationTimestamp="2025-07-08 09:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:55:55.545575145 +0000 UTC m=+1.165284121" watchObservedRunningTime="2025-07-08 09:55:55.551779745 +0000 UTC m=+1.171488721" Jul 8 09:55:55.562504 kubelet[2654]: I0708 09:55:55.562433 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.562418905 podStartE2EDuration="1.562418905s" podCreationTimestamp="2025-07-08 09:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:55:55.552219105 +0000 UTC m=+1.171928121" watchObservedRunningTime="2025-07-08 09:55:55.562418905 +0000 UTC m=+1.182127881" Jul 8 09:55:56.796731 sudo[1718]: pam_unix(sudo:session): session closed for user root Jul 8 09:55:56.797864 sshd[1717]: Connection closed by 10.0.0.1 port 52782 Jul 8 09:55:56.798680 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jul 8 09:55:56.802072 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:52782.service: Deactivated successfully. Jul 8 09:55:56.804285 systemd[1]: session-7.scope: Deactivated successfully. Jul 8 09:55:56.804677 systemd[1]: session-7.scope: Consumed 8.391s CPU time, 256.4M memory peak. Jul 8 09:55:56.807495 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Jul 8 09:55:56.809209 systemd-logind[1484]: Removed session 7. Jul 8 09:55:59.970522 kubelet[2654]: I0708 09:55:59.970429 2654 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 8 09:55:59.971363 containerd[1500]: time="2025-07-08T09:55:59.971319314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 8 09:55:59.971596 kubelet[2654]: I0708 09:55:59.971517 2654 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 8 09:56:00.886459 systemd[1]: Created slice kubepods-besteffort-pod0681f09e_a960_49c2_a3de_10bb52457734.slice - libcontainer container kubepods-besteffort-pod0681f09e_a960_49c2_a3de_10bb52457734.slice. Jul 8 09:56:00.908335 systemd[1]: Created slice kubepods-burstable-pod74debb58_4bc0_4cb2_83a1_0963dd5e525d.slice - libcontainer container kubepods-burstable-pod74debb58_4bc0_4cb2_83a1_0963dd5e525d.slice. 
Jul 8 09:56:00.934022 kubelet[2654]: I0708 09:56:00.933978 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-cgroup\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934138 kubelet[2654]: I0708 09:56:00.934055 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cni-path\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934138 kubelet[2654]: I0708 09:56:00.934074 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-kernel\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934138 kubelet[2654]: I0708 09:56:00.934101 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2gnn\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-kube-api-access-g2gnn\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934138 kubelet[2654]: I0708 09:56:00.934120 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hubble-tls\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934138 kubelet[2654]: I0708 09:56:00.934135 2654 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0681f09e-a960-49c2-a3de-10bb52457734-xtables-lock\") pod \"kube-proxy-pq5gw\" (UID: \"0681f09e-a960-49c2-a3de-10bb52457734\") " pod="kube-system/kube-proxy-pq5gw" Jul 8 09:56:00.934277 kubelet[2654]: I0708 09:56:00.934204 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0681f09e-a960-49c2-a3de-10bb52457734-lib-modules\") pod \"kube-proxy-pq5gw\" (UID: \"0681f09e-a960-49c2-a3de-10bb52457734\") " pod="kube-system/kube-proxy-pq5gw" Jul 8 09:56:00.934277 kubelet[2654]: I0708 09:56:00.934222 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-etc-cni-netd\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934277 kubelet[2654]: I0708 09:56:00.934235 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-lib-modules\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934342 kubelet[2654]: I0708 09:56:00.934278 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-xtables-lock\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934342 kubelet[2654]: I0708 09:56:00.934334 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/0681f09e-a960-49c2-a3de-10bb52457734-kube-proxy\") pod \"kube-proxy-pq5gw\" (UID: \"0681f09e-a960-49c2-a3de-10bb52457734\") " pod="kube-system/kube-proxy-pq5gw" Jul 8 09:56:00.934391 kubelet[2654]: I0708 09:56:00.934351 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4sx\" (UniqueName: \"kubernetes.io/projected/0681f09e-a960-49c2-a3de-10bb52457734-kube-api-access-mw4sx\") pod \"kube-proxy-pq5gw\" (UID: \"0681f09e-a960-49c2-a3de-10bb52457734\") " pod="kube-system/kube-proxy-pq5gw" Jul 8 09:56:00.934391 kubelet[2654]: I0708 09:56:00.934370 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hostproc\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934437 kubelet[2654]: I0708 09:56:00.934415 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74debb58-4bc0-4cb2-83a1-0963dd5e525d-clustermesh-secrets\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934437 kubelet[2654]: I0708 09:56:00.934430 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-config-path\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934481 kubelet[2654]: I0708 09:56:00.934444 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-net\") pod 
\"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934502 kubelet[2654]: I0708 09:56:00.934490 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-run\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.934526 kubelet[2654]: I0708 09:56:00.934506 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-bpf-maps\") pod \"cilium-hj4l7\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " pod="kube-system/cilium-hj4l7" Jul 8 09:56:00.992990 systemd[1]: Created slice kubepods-besteffort-pod69d7e08a_d60d_454d_b3a4_af55f98c37f8.slice - libcontainer container kubepods-besteffort-pod69d7e08a_d60d_454d_b3a4_af55f98c37f8.slice. 
Jul 8 09:56:01.035563 kubelet[2654]: I0708 09:56:01.035516 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69d7e08a-d60d-454d-b3a4-af55f98c37f8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hq6rg\" (UID: \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\") " pod="kube-system/cilium-operator-6c4d7847fc-hq6rg" Jul 8 09:56:01.035919 kubelet[2654]: I0708 09:56:01.035651 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9kmr\" (UniqueName: \"kubernetes.io/projected/69d7e08a-d60d-454d-b3a4-af55f98c37f8-kube-api-access-q9kmr\") pod \"cilium-operator-6c4d7847fc-hq6rg\" (UID: \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\") " pod="kube-system/cilium-operator-6c4d7847fc-hq6rg" Jul 8 09:56:01.205701 containerd[1500]: time="2025-07-08T09:56:01.205587686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pq5gw,Uid:0681f09e-a960-49c2-a3de-10bb52457734,Namespace:kube-system,Attempt:0,}" Jul 8 09:56:01.212384 containerd[1500]: time="2025-07-08T09:56:01.212333916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hj4l7,Uid:74debb58-4bc0-4cb2-83a1-0963dd5e525d,Namespace:kube-system,Attempt:0,}" Jul 8 09:56:01.228232 containerd[1500]: time="2025-07-08T09:56:01.228183305Z" level=info msg="connecting to shim 0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48" address="unix:///run/containerd/s/f716b17480dd615cb3b3e013f277afce5d022627af43547e4129d20db21ba817" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:56:01.234304 containerd[1500]: time="2025-07-08T09:56:01.234273012Z" level=info msg="connecting to shim 8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:56:01.253304 systemd[1]: Started 
cri-containerd-0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48.scope - libcontainer container 0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48. Jul 8 09:56:01.256722 systemd[1]: Started cri-containerd-8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6.scope - libcontainer container 8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6. Jul 8 09:56:01.279326 containerd[1500]: time="2025-07-08T09:56:01.279140449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pq5gw,Uid:0681f09e-a960-49c2-a3de-10bb52457734,Namespace:kube-system,Attempt:0,} returns sandbox id \"0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48\"" Jul 8 09:56:01.280789 containerd[1500]: time="2025-07-08T09:56:01.280760896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hj4l7,Uid:74debb58-4bc0-4cb2-83a1-0963dd5e525d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\"" Jul 8 09:56:01.282616 containerd[1500]: time="2025-07-08T09:56:01.282584024Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 8 09:56:01.284669 containerd[1500]: time="2025-07-08T09:56:01.284628553Z" level=info msg="CreateContainer within sandbox \"0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 8 09:56:01.292967 containerd[1500]: time="2025-07-08T09:56:01.292938630Z" level=info msg="Container bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:01.302769 containerd[1500]: time="2025-07-08T09:56:01.302632232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hq6rg,Uid:69d7e08a-d60d-454d-b3a4-af55f98c37f8,Namespace:kube-system,Attempt:0,}" Jul 8 09:56:01.313681 containerd[1500]: 
time="2025-07-08T09:56:01.313639881Z" level=info msg="CreateContainer within sandbox \"0391fd13c11060c4b88edf25a6a292e3ea11171e3dcc13bed70395d9462d1b48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5\"" Jul 8 09:56:01.315184 containerd[1500]: time="2025-07-08T09:56:01.314955366Z" level=info msg="StartContainer for \"bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5\"" Jul 8 09:56:01.317406 containerd[1500]: time="2025-07-08T09:56:01.317356417Z" level=info msg="connecting to shim bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5" address="unix:///run/containerd/s/f716b17480dd615cb3b3e013f277afce5d022627af43547e4129d20db21ba817" protocol=ttrpc version=3 Jul 8 09:56:01.320798 containerd[1500]: time="2025-07-08T09:56:01.320758432Z" level=info msg="connecting to shim 5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6" address="unix:///run/containerd/s/31f5a30278b547a073ce2b4a797779cfa9188c836d848279e977c05359e26057" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:56:01.338320 systemd[1]: Started cri-containerd-bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5.scope - libcontainer container bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5. Jul 8 09:56:01.341230 systemd[1]: Started cri-containerd-5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6.scope - libcontainer container 5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6. 
Jul 8 09:56:01.376589 containerd[1500]: time="2025-07-08T09:56:01.376549757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hq6rg,Uid:69d7e08a-d60d-454d-b3a4-af55f98c37f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\"" Jul 8 09:56:01.378543 containerd[1500]: time="2025-07-08T09:56:01.377081999Z" level=info msg="StartContainer for \"bc2b8dc6dff4d9f4469ee2541cc08da0f19e6829c1806bfe3264d231da688ff5\" returns successfully" Jul 8 09:56:02.620186 kubelet[2654]: I0708 09:56:02.620014 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pq5gw" podStartSLOduration=2.619982288 podStartE2EDuration="2.619982288s" podCreationTimestamp="2025-07-08 09:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:56:01.554184217 +0000 UTC m=+7.173893193" watchObservedRunningTime="2025-07-08 09:56:02.619982288 +0000 UTC m=+8.239691264" Jul 8 09:56:04.319044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014116470.mount: Deactivated successfully. 
Jul 8 09:56:05.509707 containerd[1500]: time="2025-07-08T09:56:05.509667142Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 09:56:05.510258 containerd[1500]: time="2025-07-08T09:56:05.509964143Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 8 09:56:05.510981 containerd[1500]: time="2025-07-08T09:56:05.510942186Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 09:56:05.513078 containerd[1500]: time="2025-07-08T09:56:05.512982833Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.230225328s" Jul 8 09:56:05.513078 containerd[1500]: time="2025-07-08T09:56:05.513014153Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 8 09:56:05.519448 containerd[1500]: time="2025-07-08T09:56:05.519394815Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 8 09:56:05.523349 containerd[1500]: time="2025-07-08T09:56:05.523301748Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 8 09:56:05.529994 containerd[1500]: time="2025-07-08T09:56:05.529821251Z" level=info msg="Container 76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:05.533079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394658149.mount: Deactivated successfully. Jul 8 09:56:05.535845 containerd[1500]: time="2025-07-08T09:56:05.535792391Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\"" Jul 8 09:56:05.536529 containerd[1500]: time="2025-07-08T09:56:05.536280952Z" level=info msg="StartContainer for \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\"" Jul 8 09:56:05.536966 containerd[1500]: time="2025-07-08T09:56:05.536942515Z" level=info msg="connecting to shim 76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" protocol=ttrpc version=3 Jul 8 09:56:05.590407 systemd[1]: Started cri-containerd-76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818.scope - libcontainer container 76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818. Jul 8 09:56:05.624122 containerd[1500]: time="2025-07-08T09:56:05.624080210Z" level=info msg="StartContainer for \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" returns successfully" Jul 8 09:56:05.667512 systemd[1]: cri-containerd-76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818.scope: Deactivated successfully. Jul 8 09:56:05.667873 systemd[1]: cri-containerd-76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818.scope: Consumed 58ms CPU time, 5.2M memory peak, 3.1M written to disk. 
Jul 8 09:56:05.682295 containerd[1500]: time="2025-07-08T09:56:05.682257688Z" level=info msg="received exit event container_id:\"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" id:\"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" pid:3082 exited_at:{seconds:1751968565 nanos:681798126}" Jul 8 09:56:05.686260 containerd[1500]: time="2025-07-08T09:56:05.686220381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" id:\"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" pid:3082 exited_at:{seconds:1751968565 nanos:681798126}" Jul 8 09:56:05.719926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818-rootfs.mount: Deactivated successfully. Jul 8 09:56:06.560223 containerd[1500]: time="2025-07-08T09:56:06.560173429Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 8 09:56:06.569194 containerd[1500]: time="2025-07-08T09:56:06.567360132Z" level=info msg="Container a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:06.574655 containerd[1500]: time="2025-07-08T09:56:06.574403274Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\"" Jul 8 09:56:06.575857 containerd[1500]: time="2025-07-08T09:56:06.575830478Z" level=info msg="StartContainer for \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\"" Jul 8 09:56:06.576830 containerd[1500]: time="2025-07-08T09:56:06.576768921Z" level=info msg="connecting to shim 
a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" protocol=ttrpc version=3 Jul 8 09:56:06.626324 systemd[1]: Started cri-containerd-a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15.scope - libcontainer container a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15. Jul 8 09:56:06.658043 containerd[1500]: time="2025-07-08T09:56:06.658002860Z" level=info msg="StartContainer for \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" returns successfully" Jul 8 09:56:06.672670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 8 09:56:06.673048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 8 09:56:06.673475 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 8 09:56:06.675495 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 8 09:56:06.676880 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 8 09:56:06.678001 systemd[1]: cri-containerd-a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15.scope: Deactivated successfully. Jul 8 09:56:06.699936 containerd[1500]: time="2025-07-08T09:56:06.699764833Z" level=info msg="received exit event container_id:\"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" id:\"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" pid:3128 exited_at:{seconds:1751968566 nanos:698347068}" Jul 8 09:56:06.700103 containerd[1500]: time="2025-07-08T09:56:06.700082274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" id:\"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" pid:3128 exited_at:{seconds:1751968566 nanos:698347068}" Jul 8 09:56:06.718468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 8 09:56:07.511512 containerd[1500]: time="2025-07-08T09:56:07.511461273Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 09:56:07.512000 containerd[1500]: time="2025-07-08T09:56:07.511974555Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 8 09:56:07.512855 containerd[1500]: time="2025-07-08T09:56:07.512826277Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 09:56:07.518440 containerd[1500]: time="2025-07-08T09:56:07.518406134Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.998983239s" Jul 8 09:56:07.518497 containerd[1500]: time="2025-07-08T09:56:07.518439494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 8 09:56:07.535691 containerd[1500]: time="2025-07-08T09:56:07.535646185Z" level=info msg="CreateContainer within sandbox \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 8 09:56:07.542509 containerd[1500]: time="2025-07-08T09:56:07.542480205Z" level=info msg="Container 
9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:07.547214 containerd[1500]: time="2025-07-08T09:56:07.547180620Z" level=info msg="CreateContainer within sandbox \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\"" Jul 8 09:56:07.548130 containerd[1500]: time="2025-07-08T09:56:07.548073822Z" level=info msg="StartContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\"" Jul 8 09:56:07.549064 containerd[1500]: time="2025-07-08T09:56:07.548982665Z" level=info msg="connecting to shim 9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b" address="unix:///run/containerd/s/31f5a30278b547a073ce2b4a797779cfa9188c836d848279e977c05359e26057" protocol=ttrpc version=3 Jul 8 09:56:07.568322 systemd[1]: Started cri-containerd-9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b.scope - libcontainer container 9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b. Jul 8 09:56:07.569910 containerd[1500]: time="2025-07-08T09:56:07.569872087Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 8 09:56:07.571707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15-rootfs.mount: Deactivated successfully. Jul 8 09:56:07.591992 containerd[1500]: time="2025-07-08T09:56:07.591221951Z" level=info msg="Container 1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:07.594418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169903390.mount: Deactivated successfully. 
Jul 8 09:56:07.599776 containerd[1500]: time="2025-07-08T09:56:07.599641936Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\"" Jul 8 09:56:07.600129 containerd[1500]: time="2025-07-08T09:56:07.600076657Z" level=info msg="StartContainer for \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\"" Jul 8 09:56:07.601799 containerd[1500]: time="2025-07-08T09:56:07.601769302Z" level=info msg="connecting to shim 1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" protocol=ttrpc version=3 Jul 8 09:56:07.626704 containerd[1500]: time="2025-07-08T09:56:07.626495576Z" level=info msg="StartContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" returns successfully" Jul 8 09:56:07.627316 systemd[1]: Started cri-containerd-1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076.scope - libcontainer container 1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076. Jul 8 09:56:07.671115 containerd[1500]: time="2025-07-08T09:56:07.671019549Z" level=info msg="StartContainer for \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" returns successfully" Jul 8 09:56:07.699336 systemd[1]: cri-containerd-1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076.scope: Deactivated successfully. 
Jul 8 09:56:07.700636 containerd[1500]: time="2025-07-08T09:56:07.700505877Z" level=info msg="received exit event container_id:\"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" id:\"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" pid:3223 exited_at:{seconds:1751968567 nanos:700198996}" Jul 8 09:56:07.700636 containerd[1500]: time="2025-07-08T09:56:07.700593797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" id:\"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" pid:3223 exited_at:{seconds:1751968567 nanos:700198996}" Jul 8 09:56:07.719300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076-rootfs.mount: Deactivated successfully. Jul 8 09:56:08.056309 update_engine[1489]: I20250708 09:56:08.056197 1489 update_attempter.cc:509] Updating boot flags... Jul 8 09:56:08.573703 containerd[1500]: time="2025-07-08T09:56:08.573659814Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 8 09:56:08.589651 containerd[1500]: time="2025-07-08T09:56:08.589598298Z" level=info msg="Container 588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:08.591674 kubelet[2654]: I0708 09:56:08.591612 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hq6rg" podStartSLOduration=2.450427652 podStartE2EDuration="8.591595904s" podCreationTimestamp="2025-07-08 09:56:00 +0000 UTC" firstStartedPulling="2025-07-08 09:56:01.378362165 +0000 UTC m=+6.998071141" lastFinishedPulling="2025-07-08 09:56:07.519530457 +0000 UTC m=+13.139239393" observedRunningTime="2025-07-08 09:56:08.576168901 +0000 UTC m=+14.195877877" 
watchObservedRunningTime="2025-07-08 09:56:08.591595904 +0000 UTC m=+14.211304880" Jul 8 09:56:08.596844 containerd[1500]: time="2025-07-08T09:56:08.596796918Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\"" Jul 8 09:56:08.597576 containerd[1500]: time="2025-07-08T09:56:08.597544840Z" level=info msg="StartContainer for \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\"" Jul 8 09:56:08.598943 containerd[1500]: time="2025-07-08T09:56:08.598680164Z" level=info msg="connecting to shim 588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" protocol=ttrpc version=3 Jul 8 09:56:08.618307 systemd[1]: Started cri-containerd-588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6.scope - libcontainer container 588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6. Jul 8 09:56:08.639714 systemd[1]: cri-containerd-588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6.scope: Deactivated successfully. 
Jul 8 09:56:08.640521 containerd[1500]: time="2025-07-08T09:56:08.640481640Z" level=info msg="received exit event container_id:\"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" id:\"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" pid:3283 exited_at:{seconds:1751968568 nanos:639944319}" Jul 8 09:56:08.640661 containerd[1500]: time="2025-07-08T09:56:08.640552841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" id:\"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" pid:3283 exited_at:{seconds:1751968568 nanos:639944319}" Jul 8 09:56:08.648289 containerd[1500]: time="2025-07-08T09:56:08.648251462Z" level=info msg="StartContainer for \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" returns successfully" Jul 8 09:56:08.658644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6-rootfs.mount: Deactivated successfully. Jul 8 09:56:09.581560 containerd[1500]: time="2025-07-08T09:56:09.581492810Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 8 09:56:09.589819 containerd[1500]: time="2025-07-08T09:56:09.589105790Z" level=info msg="Container 670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:09.592715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391918109.mount: Deactivated successfully. 
Jul 8 09:56:09.597075 containerd[1500]: time="2025-07-08T09:56:09.597023930Z" level=info msg="CreateContainer within sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\"" Jul 8 09:56:09.597546 containerd[1500]: time="2025-07-08T09:56:09.597522772Z" level=info msg="StartContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\"" Jul 8 09:56:09.599239 containerd[1500]: time="2025-07-08T09:56:09.599192656Z" level=info msg="connecting to shim 670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81" address="unix:///run/containerd/s/4fd20d196b8667482c63e328f2176ce5146272aedf3ca3f1884f425b7a7a165d" protocol=ttrpc version=3 Jul 8 09:56:09.621328 systemd[1]: Started cri-containerd-670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81.scope - libcontainer container 670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81. 
Jul 8 09:56:09.652305 containerd[1500]: time="2025-07-08T09:56:09.652267835Z" level=info msg="StartContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" returns successfully" Jul 8 09:56:09.739849 containerd[1500]: time="2025-07-08T09:56:09.739808785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" id:\"48d828731f8d3879643777bfb225d1e90ea09815664b77accef0eee5101886c3\" pid:3352 exited_at:{seconds:1751968569 nanos:738718262}" Jul 8 09:56:09.839655 kubelet[2654]: I0708 09:56:09.838407 2654 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 8 09:56:09.899892 kubelet[2654]: I0708 09:56:09.899854 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtknn\" (UniqueName: \"kubernetes.io/projected/0f3686f8-8199-49c3-a419-37cfdd1bf18b-kube-api-access-jtknn\") pod \"coredns-674b8bbfcf-pcfrn\" (UID: \"0f3686f8-8199-49c3-a419-37cfdd1bf18b\") " pod="kube-system/coredns-674b8bbfcf-pcfrn" Jul 8 09:56:09.900015 kubelet[2654]: I0708 09:56:09.899898 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jsr\" (UniqueName: \"kubernetes.io/projected/a18fa7b8-d603-4024-9fff-37d145de10ba-kube-api-access-d8jsr\") pod \"coredns-674b8bbfcf-z8trz\" (UID: \"a18fa7b8-d603-4024-9fff-37d145de10ba\") " pod="kube-system/coredns-674b8bbfcf-z8trz" Jul 8 09:56:09.900015 kubelet[2654]: I0708 09:56:09.899914 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3686f8-8199-49c3-a419-37cfdd1bf18b-config-volume\") pod \"coredns-674b8bbfcf-pcfrn\" (UID: \"0f3686f8-8199-49c3-a419-37cfdd1bf18b\") " pod="kube-system/coredns-674b8bbfcf-pcfrn" Jul 8 09:56:09.900015 kubelet[2654]: I0708 09:56:09.899935 2654 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18fa7b8-d603-4024-9fff-37d145de10ba-config-volume\") pod \"coredns-674b8bbfcf-z8trz\" (UID: \"a18fa7b8-d603-4024-9fff-37d145de10ba\") " pod="kube-system/coredns-674b8bbfcf-z8trz" Jul 8 09:56:09.911739 systemd[1]: Created slice kubepods-burstable-poda18fa7b8_d603_4024_9fff_37d145de10ba.slice - libcontainer container kubepods-burstable-poda18fa7b8_d603_4024_9fff_37d145de10ba.slice. Jul 8 09:56:09.920344 systemd[1]: Created slice kubepods-burstable-pod0f3686f8_8199_49c3_a419_37cfdd1bf18b.slice - libcontainer container kubepods-burstable-pod0f3686f8_8199_49c3_a419_37cfdd1bf18b.slice. Jul 8 09:56:10.218048 containerd[1500]: time="2025-07-08T09:56:10.217677082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z8trz,Uid:a18fa7b8-d603-4024-9fff-37d145de10ba,Namespace:kube-system,Attempt:0,}" Jul 8 09:56:10.223472 containerd[1500]: time="2025-07-08T09:56:10.223439816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfrn,Uid:0f3686f8-8199-49c3-a419-37cfdd1bf18b,Namespace:kube-system,Attempt:0,}" Jul 8 09:56:11.886741 systemd-networkd[1436]: cilium_host: Link UP Jul 8 09:56:11.886854 systemd-networkd[1436]: cilium_net: Link UP Jul 8 09:56:11.886971 systemd-networkd[1436]: cilium_net: Gained carrier Jul 8 09:56:11.887101 systemd-networkd[1436]: cilium_host: Gained carrier Jul 8 09:56:11.939273 systemd-networkd[1436]: cilium_host: Gained IPv6LL Jul 8 09:56:11.976495 systemd-networkd[1436]: cilium_vxlan: Link UP Jul 8 09:56:11.976503 systemd-networkd[1436]: cilium_vxlan: Gained carrier Jul 8 09:56:12.018422 systemd-networkd[1436]: cilium_net: Gained IPv6LL Jul 8 09:56:12.306261 kernel: NET: Registered PF_ALG protocol family Jul 8 09:56:12.867470 systemd-networkd[1436]: lxc_health: Link UP Jul 8 09:56:12.868893 systemd-networkd[1436]: lxc_health: Gained carrier Jul 8 09:56:13.229851 
kubelet[2654]: I0708 09:56:13.229461 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hj4l7" podStartSLOduration=8.996309873 podStartE2EDuration="13.229446612s" podCreationTimestamp="2025-07-08 09:56:00 +0000 UTC" firstStartedPulling="2025-07-08 09:56:01.282315983 +0000 UTC m=+6.902024959" lastFinishedPulling="2025-07-08 09:56:05.515452722 +0000 UTC m=+11.135161698" observedRunningTime="2025-07-08 09:56:10.597284254 +0000 UTC m=+16.216993270" watchObservedRunningTime="2025-07-08 09:56:13.229446612 +0000 UTC m=+18.849155588" Jul 8 09:56:13.398035 systemd-networkd[1436]: lxc111f75064e83: Link UP Jul 8 09:56:13.399176 kernel: eth0: renamed from tmp6cffd Jul 8 09:56:13.401756 systemd-networkd[1436]: lxc111f75064e83: Gained carrier Jul 8 09:56:13.403508 systemd-networkd[1436]: lxc22c7343a4c2d: Link UP Jul 8 09:56:13.417232 kernel: eth0: renamed from tmp11023 Jul 8 09:56:13.418071 systemd-networkd[1436]: lxc22c7343a4c2d: Gained carrier Jul 8 09:56:13.736710 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Jul 8 09:56:13.991497 systemd-networkd[1436]: lxc_health: Gained IPv6LL Jul 8 09:56:14.759496 systemd-networkd[1436]: lxc111f75064e83: Gained IPv6LL Jul 8 09:56:14.759769 systemd-networkd[1436]: lxc22c7343a4c2d: Gained IPv6LL Jul 8 09:56:16.887036 containerd[1500]: time="2025-07-08T09:56:16.886980050Z" level=info msg="connecting to shim 6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515" address="unix:///run/containerd/s/f3e370d4d8a8ce3a965304cb82c15a8fdd3dbd34596f29fe14464d7dd7fbff83" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:56:16.888300 containerd[1500]: time="2025-07-08T09:56:16.888271492Z" level=info msg="connecting to shim 11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f" address="unix:///run/containerd/s/53ea53667088764d2993d2fadc148372fd0083899db05cd43e4881228f8d2a9d" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:56:16.914293 systemd[1]: Started 
cri-containerd-11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f.scope - libcontainer container 11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f. Jul 8 09:56:16.915314 systemd[1]: Started cri-containerd-6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515.scope - libcontainer container 6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515. Jul 8 09:56:16.925651 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 09:56:16.926942 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 09:56:16.947978 containerd[1500]: time="2025-07-08T09:56:16.947933912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfrn,Uid:0f3686f8-8199-49c3-a419-37cfdd1bf18b,Namespace:kube-system,Attempt:0,} returns sandbox id \"11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f\"" Jul 8 09:56:16.952988 containerd[1500]: time="2025-07-08T09:56:16.952937560Z" level=info msg="CreateContainer within sandbox \"11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 8 09:56:16.959306 containerd[1500]: time="2025-07-08T09:56:16.959269330Z" level=info msg="Container 0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:16.965149 containerd[1500]: time="2025-07-08T09:56:16.965100900Z" level=info msg="CreateContainer within sandbox \"11023f11bf61a31d72cbc52bfa149a182d22be4fb3bb373fc34e915a0b83581f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794\"" Jul 8 09:56:16.965269 containerd[1500]: time="2025-07-08T09:56:16.965118620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-z8trz,Uid:a18fa7b8-d603-4024-9fff-37d145de10ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515\"" Jul 8 09:56:16.965743 containerd[1500]: time="2025-07-08T09:56:16.965498421Z" level=info msg="StartContainer for \"0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794\"" Jul 8 09:56:16.966771 containerd[1500]: time="2025-07-08T09:56:16.966709503Z" level=info msg="connecting to shim 0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794" address="unix:///run/containerd/s/53ea53667088764d2993d2fadc148372fd0083899db05cd43e4881228f8d2a9d" protocol=ttrpc version=3 Jul 8 09:56:16.971865 containerd[1500]: time="2025-07-08T09:56:16.971832751Z" level=info msg="CreateContainer within sandbox \"6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 8 09:56:16.983595 containerd[1500]: time="2025-07-08T09:56:16.983552131Z" level=info msg="Container 2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:56:16.988336 systemd[1]: Started cri-containerd-0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794.scope - libcontainer container 0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794. 
Jul 8 09:56:16.989591 containerd[1500]: time="2025-07-08T09:56:16.989557821Z" level=info msg="CreateContainer within sandbox \"6cffd91dad3f4f83354190720daef97e52abed244f99f61b43539f6bbba64515\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656\"" Jul 8 09:56:16.990605 containerd[1500]: time="2025-07-08T09:56:16.990561663Z" level=info msg="StartContainer for \"2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656\"" Jul 8 09:56:16.992353 containerd[1500]: time="2025-07-08T09:56:16.992309066Z" level=info msg="connecting to shim 2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656" address="unix:///run/containerd/s/f3e370d4d8a8ce3a965304cb82c15a8fdd3dbd34596f29fe14464d7dd7fbff83" protocol=ttrpc version=3 Jul 8 09:56:17.010297 systemd[1]: Started cri-containerd-2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656.scope - libcontainer container 2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656. 
Jul 8 09:56:17.021990 containerd[1500]: time="2025-07-08T09:56:17.020720031Z" level=info msg="StartContainer for \"0d91a85c9d062f358fcfb419c606d7bd2dfa369896e2d8c79ae8c829a35df794\" returns successfully" Jul 8 09:56:17.045945 containerd[1500]: time="2025-07-08T09:56:17.043932027Z" level=info msg="StartContainer for \"2e8a9166970b27b0ffed2331947d6fffc5f9a55b1fb6791925df27784ab3d656\" returns successfully" Jul 8 09:56:17.606723 kubelet[2654]: I0708 09:56:17.606663 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pcfrn" podStartSLOduration=17.606650987 podStartE2EDuration="17.606650987s" podCreationTimestamp="2025-07-08 09:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:56:17.605124545 +0000 UTC m=+23.224833521" watchObservedRunningTime="2025-07-08 09:56:17.606650987 +0000 UTC m=+23.226359963" Jul 8 09:56:17.615991 kubelet[2654]: I0708 09:56:17.615940 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z8trz" podStartSLOduration=17.615926362 podStartE2EDuration="17.615926362s" podCreationTimestamp="2025-07-08 09:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:56:17.615546921 +0000 UTC m=+23.235255897" watchObservedRunningTime="2025-07-08 09:56:17.615926362 +0000 UTC m=+23.235635338" Jul 8 09:56:20.293172 kubelet[2654]: I0708 09:56:20.292963 2654 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 8 09:56:22.319299 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:49556.service - OpenSSH per-connection server daemon (10.0.0.1:49556). 
Jul 8 09:56:22.363328 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 49556 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:22.364519 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:22.368913 systemd-logind[1484]: New session 8 of user core. Jul 8 09:56:22.378435 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 8 09:56:22.503179 sshd[3999]: Connection closed by 10.0.0.1 port 49556 Jul 8 09:56:22.503462 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:22.506865 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:49556.service: Deactivated successfully. Jul 8 09:56:22.508495 systemd[1]: session-8.scope: Deactivated successfully. Jul 8 09:56:22.509131 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Jul 8 09:56:22.510204 systemd-logind[1484]: Removed session 8. Jul 8 09:56:27.515359 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:38978.service - OpenSSH per-connection server daemon (10.0.0.1:38978). Jul 8 09:56:27.568816 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 38978 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:27.570298 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:27.574628 systemd-logind[1484]: New session 9 of user core. Jul 8 09:56:27.589363 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 8 09:56:27.701195 sshd[4017]: Connection closed by 10.0.0.1 port 38978 Jul 8 09:56:27.701127 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:27.704720 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:38978.service: Deactivated successfully. Jul 8 09:56:27.706375 systemd[1]: session-9.scope: Deactivated successfully. Jul 8 09:56:27.707592 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. 
Jul 8 09:56:27.708542 systemd-logind[1484]: Removed session 9. Jul 8 09:56:32.716474 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250). Jul 8 09:56:32.772494 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:32.773695 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:32.777563 systemd-logind[1484]: New session 10 of user core. Jul 8 09:56:32.786307 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 8 09:56:32.905210 sshd[4037]: Connection closed by 10.0.0.1 port 56250 Jul 8 09:56:32.905357 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:32.917409 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:56250.service: Deactivated successfully. Jul 8 09:56:32.919059 systemd[1]: session-10.scope: Deactivated successfully. Jul 8 09:56:32.921753 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Jul 8 09:56:32.924012 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260). Jul 8 09:56:32.925228 systemd-logind[1484]: Removed session 10. Jul 8 09:56:32.978078 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:32.979139 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:32.984210 systemd-logind[1484]: New session 11 of user core. Jul 8 09:56:32.994289 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 8 09:56:33.151189 sshd[4055]: Connection closed by 10.0.0.1 port 56260 Jul 8 09:56:33.150902 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:33.164052 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:56260.service: Deactivated successfully. 
Jul 8 09:56:33.168116 systemd[1]: session-11.scope: Deactivated successfully. Jul 8 09:56:33.169741 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Jul 8 09:56:33.172850 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:56264.service - OpenSSH per-connection server daemon (10.0.0.1:56264). Jul 8 09:56:33.174183 systemd-logind[1484]: Removed session 11. Jul 8 09:56:33.232863 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 56264 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:33.234235 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:33.238148 systemd-logind[1484]: New session 12 of user core. Jul 8 09:56:33.248284 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 8 09:56:33.360291 sshd[4070]: Connection closed by 10.0.0.1 port 56264 Jul 8 09:56:33.360996 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:33.365174 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Jul 8 09:56:33.365633 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:56264.service: Deactivated successfully. Jul 8 09:56:33.367990 systemd[1]: session-12.scope: Deactivated successfully. Jul 8 09:56:33.370009 systemd-logind[1484]: Removed session 12. Jul 8 09:56:38.376010 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:56268.service - OpenSSH per-connection server daemon (10.0.0.1:56268). Jul 8 09:56:38.416512 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 56268 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:38.417671 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:38.421968 systemd-logind[1484]: New session 13 of user core. Jul 8 09:56:38.431346 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 8 09:56:38.540384 sshd[4087]: Connection closed by 10.0.0.1 port 56268 Jul 8 09:56:38.540885 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:38.544215 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:56268.service: Deactivated successfully. Jul 8 09:56:38.546338 systemd[1]: session-13.scope: Deactivated successfully. Jul 8 09:56:38.547127 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Jul 8 09:56:38.548148 systemd-logind[1484]: Removed session 13. Jul 8 09:56:43.552707 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:42416.service - OpenSSH per-connection server daemon (10.0.0.1:42416). Jul 8 09:56:43.603918 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 42416 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:43.605094 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:43.608681 systemd-logind[1484]: New session 14 of user core. Jul 8 09:56:43.623325 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 8 09:56:43.734199 sshd[4103]: Connection closed by 10.0.0.1 port 42416 Jul 8 09:56:43.734135 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:43.746220 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:42416.service: Deactivated successfully. Jul 8 09:56:43.747817 systemd[1]: session-14.scope: Deactivated successfully. Jul 8 09:56:43.748534 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Jul 8 09:56:43.750766 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:42420.service - OpenSSH per-connection server daemon (10.0.0.1:42420). Jul 8 09:56:43.751486 systemd-logind[1484]: Removed session 14. 
Jul 8 09:56:43.796192 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 42420 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:43.797288 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:43.800924 systemd-logind[1484]: New session 15 of user core. Jul 8 09:56:43.808297 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 8 09:56:44.163615 sshd[4119]: Connection closed by 10.0.0.1 port 42420 Jul 8 09:56:44.164500 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:44.184460 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:42420.service: Deactivated successfully. Jul 8 09:56:44.186495 systemd[1]: session-15.scope: Deactivated successfully. Jul 8 09:56:44.187184 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Jul 8 09:56:44.189384 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:42424.service - OpenSSH per-connection server daemon (10.0.0.1:42424). Jul 8 09:56:44.189897 systemd-logind[1484]: Removed session 15. Jul 8 09:56:44.240497 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 42424 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:44.241502 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:44.245069 systemd-logind[1484]: New session 16 of user core. Jul 8 09:56:44.252282 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 8 09:56:44.952235 sshd[4134]: Connection closed by 10.0.0.1 port 42424 Jul 8 09:56:44.952905 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:44.965616 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:42424.service: Deactivated successfully. Jul 8 09:56:44.970218 systemd[1]: session-16.scope: Deactivated successfully. Jul 8 09:56:44.971211 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. 
Jul 8 09:56:44.973961 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440). Jul 8 09:56:44.977707 systemd-logind[1484]: Removed session 16. Jul 8 09:56:45.020863 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:45.022016 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:45.025983 systemd-logind[1484]: New session 17 of user core. Jul 8 09:56:45.034289 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 8 09:56:45.251115 sshd[4156]: Connection closed by 10.0.0.1 port 42440 Jul 8 09:56:45.250864 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:45.260600 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:42440.service: Deactivated successfully. Jul 8 09:56:45.263067 systemd[1]: session-17.scope: Deactivated successfully. Jul 8 09:56:45.264782 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Jul 8 09:56:45.266828 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:42448.service - OpenSSH per-connection server daemon (10.0.0.1:42448). Jul 8 09:56:45.267688 systemd-logind[1484]: Removed session 17. Jul 8 09:56:45.316641 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 42448 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:45.317788 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:45.322079 systemd-logind[1484]: New session 18 of user core. Jul 8 09:56:45.336306 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 8 09:56:45.438645 sshd[4171]: Connection closed by 10.0.0.1 port 42448 Jul 8 09:56:45.438963 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:45.442364 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:42448.service: Deactivated successfully. 
Jul 8 09:56:45.444466 systemd[1]: session-18.scope: Deactivated successfully. Jul 8 09:56:45.447336 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Jul 8 09:56:45.448398 systemd-logind[1484]: Removed session 18. Jul 8 09:56:50.450450 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454). Jul 8 09:56:50.511162 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:50.512356 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:50.516034 systemd-logind[1484]: New session 19 of user core. Jul 8 09:56:50.527314 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 8 09:56:50.631070 sshd[4191]: Connection closed by 10.0.0.1 port 42454 Jul 8 09:56:50.631579 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:50.635013 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:42454.service: Deactivated successfully. Jul 8 09:56:50.637754 systemd[1]: session-19.scope: Deactivated successfully. Jul 8 09:56:50.638463 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Jul 8 09:56:50.639416 systemd-logind[1484]: Removed session 19. Jul 8 09:56:55.646309 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:33260.service - OpenSSH per-connection server daemon (10.0.0.1:33260). Jul 8 09:56:55.700415 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 33260 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:56:55.701454 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:56:55.704854 systemd-logind[1484]: New session 20 of user core. Jul 8 09:56:55.719302 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 8 09:56:55.827212 sshd[4210]: Connection closed by 10.0.0.1 port 33260 Jul 8 09:56:55.827710 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Jul 8 09:56:55.831143 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:33260.service: Deactivated successfully. Jul 8 09:56:55.833139 systemd[1]: session-20.scope: Deactivated successfully. Jul 8 09:56:55.833969 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Jul 8 09:56:55.835398 systemd-logind[1484]: Removed session 20. Jul 8 09:57:00.842249 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:33270.service - OpenSSH per-connection server daemon (10.0.0.1:33270). Jul 8 09:57:00.898473 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 33270 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:57:00.899583 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:57:00.903211 systemd-logind[1484]: New session 21 of user core. Jul 8 09:57:00.917368 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 8 09:57:01.023724 sshd[4226]: Connection closed by 10.0.0.1 port 33270 Jul 8 09:57:01.024147 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jul 8 09:57:01.034410 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:33270.service: Deactivated successfully. Jul 8 09:57:01.035867 systemd[1]: session-21.scope: Deactivated successfully. Jul 8 09:57:01.038225 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Jul 8 09:57:01.040132 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:33276.service - OpenSSH per-connection server daemon (10.0.0.1:33276). Jul 8 09:57:01.040764 systemd-logind[1484]: Removed session 21. 
Jul 8 09:57:01.090738 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 33276 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:57:01.091774 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:57:01.096076 systemd-logind[1484]: New session 22 of user core. Jul 8 09:57:01.104378 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 8 09:57:03.446283 containerd[1500]: time="2025-07-08T09:57:03.446013791Z" level=info msg="StopContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" with timeout 30 (s)" Jul 8 09:57:03.446872 containerd[1500]: time="2025-07-08T09:57:03.446742797Z" level=info msg="Stop container \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" with signal terminated" Jul 8 09:57:03.462934 systemd[1]: cri-containerd-9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b.scope: Deactivated successfully. Jul 8 09:57:03.467402 containerd[1500]: time="2025-07-08T09:57:03.467350644Z" level=info msg="received exit event container_id:\"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" id:\"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" pid:3190 exited_at:{seconds:1751968623 nanos:466895920}" Jul 8 09:57:03.467739 containerd[1500]: time="2025-07-08T09:57:03.467565646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" id:\"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" pid:3190 exited_at:{seconds:1751968623 nanos:466895920}" Jul 8 09:57:03.484741 containerd[1500]: time="2025-07-08T09:57:03.484665504Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 09:57:03.488096 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b-rootfs.mount: Deactivated successfully. Jul 8 09:57:03.490029 containerd[1500]: time="2025-07-08T09:57:03.488393695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" id:\"b15184864e7e48848c46c55ffe20939f4e3b01ac723c3ec14e8f9f6d1065efbe\" pid:4274 exited_at:{seconds:1751968623 nanos:488059772}" Jul 8 09:57:03.491855 containerd[1500]: time="2025-07-08T09:57:03.491822642Z" level=info msg="StopContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" with timeout 2 (s)" Jul 8 09:57:03.492209 containerd[1500]: time="2025-07-08T09:57:03.492142965Z" level=info msg="Stop container \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" with signal terminated" Jul 8 09:57:03.499888 systemd-networkd[1436]: lxc_health: Link DOWN Jul 8 09:57:03.499894 systemd-networkd[1436]: lxc_health: Lost carrier Jul 8 09:57:03.501610 containerd[1500]: time="2025-07-08T09:57:03.501550161Z" level=info msg="StopContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" returns successfully" Jul 8 09:57:03.503947 containerd[1500]: time="2025-07-08T09:57:03.503894340Z" level=info msg="StopPodSandbox for \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\"" Jul 8 09:57:03.510744 containerd[1500]: time="2025-07-08T09:57:03.510693395Z" level=info msg="Container to stop \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.516096 systemd[1]: cri-containerd-670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81.scope: Deactivated successfully. 
Jul 8 09:57:03.516595 systemd[1]: cri-containerd-670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81.scope: Consumed 6.453s CPU time, 122M memory peak, 120K read from disk, 12.9M written to disk. Jul 8 09:57:03.517261 containerd[1500]: time="2025-07-08T09:57:03.517212648Z" level=info msg="received exit event container_id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" pid:3322 exited_at:{seconds:1751968623 nanos:516929646}" Jul 8 09:57:03.517561 containerd[1500]: time="2025-07-08T09:57:03.517293569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" id:\"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" pid:3322 exited_at:{seconds:1751968623 nanos:516929646}" Jul 8 09:57:03.517451 systemd[1]: cri-containerd-5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6.scope: Deactivated successfully. Jul 8 09:57:03.523911 containerd[1500]: time="2025-07-08T09:57:03.523860862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" id:\"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" pid:2884 exit_status:137 exited_at:{seconds:1751968623 nanos:522668612}" Jul 8 09:57:03.545140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81-rootfs.mount: Deactivated successfully. 
Jul 8 09:57:03.554699 containerd[1500]: time="2025-07-08T09:57:03.554659032Z" level=info msg="shim disconnected" id=5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6 namespace=k8s.io Jul 8 09:57:03.554868 containerd[1500]: time="2025-07-08T09:57:03.554691792Z" level=warning msg="cleaning up after shim disconnected" id=5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6 namespace=k8s.io Jul 8 09:57:03.554868 containerd[1500]: time="2025-07-08T09:57:03.554760073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 8 09:57:03.554737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6-rootfs.mount: Deactivated successfully. Jul 8 09:57:03.555877 containerd[1500]: time="2025-07-08T09:57:03.555843081Z" level=info msg="TearDown network for sandbox \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" successfully" Jul 8 09:57:03.555877 containerd[1500]: time="2025-07-08T09:57:03.555873042Z" level=info msg="StopPodSandbox for \"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" returns successfully" Jul 8 09:57:03.557594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6-shm.mount: Deactivated successfully. 
Jul 8 09:57:03.559826 containerd[1500]: time="2025-07-08T09:57:03.559796513Z" level=info msg="StopContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" returns successfully" Jul 8 09:57:03.560323 containerd[1500]: time="2025-07-08T09:57:03.560268117Z" level=info msg="StopPodSandbox for \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\"" Jul 8 09:57:03.560377 containerd[1500]: time="2025-07-08T09:57:03.560363038Z" level=info msg="Container to stop \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.560407 containerd[1500]: time="2025-07-08T09:57:03.560375518Z" level=info msg="Container to stop \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.560407 containerd[1500]: time="2025-07-08T09:57:03.560385438Z" level=info msg="Container to stop \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.560407 containerd[1500]: time="2025-07-08T09:57:03.560393558Z" level=info msg="Container to stop \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.560407 containerd[1500]: time="2025-07-08T09:57:03.560401398Z" level=info msg="Container to stop \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 8 09:57:03.569311 systemd[1]: cri-containerd-8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6.scope: Deactivated successfully. 
Jul 8 09:57:03.571758 kubelet[2654]: I0708 09:57:03.571729 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69d7e08a-d60d-454d-b3a4-af55f98c37f8-cilium-config-path\") pod \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\" (UID: \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\") " Jul 8 09:57:03.572563 kubelet[2654]: I0708 09:57:03.572199 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9kmr\" (UniqueName: \"kubernetes.io/projected/69d7e08a-d60d-454d-b3a4-af55f98c37f8-kube-api-access-q9kmr\") pod \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\" (UID: \"69d7e08a-d60d-454d-b3a4-af55f98c37f8\") " Jul 8 09:57:03.572899 containerd[1500]: time="2025-07-08T09:57:03.572869619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" id:\"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" pid:2813 exit_status:137 exited_at:{seconds:1751968623 nanos:570864083}" Jul 8 09:57:03.573305 containerd[1500]: time="2025-07-08T09:57:03.573227542Z" level=info msg="received exit event sandbox_id:\"5714fa548ce07c79d2de836babebefac08e5a5421b1b31a4b490fba6a5236eb6\" exit_status:137 exited_at:{seconds:1751968623 nanos:522668612}" Jul 8 09:57:03.592743 systemd[1]: var-lib-kubelet-pods-69d7e08a\x2dd60d\x2d454d\x2db3a4\x2daf55f98c37f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9kmr.mount: Deactivated successfully. Jul 8 09:57:03.611217 kubelet[2654]: I0708 09:57:03.611163 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d7e08a-d60d-454d-b3a4-af55f98c37f8-kube-api-access-q9kmr" (OuterVolumeSpecName: "kube-api-access-q9kmr") pod "69d7e08a-d60d-454d-b3a4-af55f98c37f8" (UID: "69d7e08a-d60d-454d-b3a4-af55f98c37f8"). InnerVolumeSpecName "kube-api-access-q9kmr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 8 09:57:03.614608 kubelet[2654]: I0708 09:57:03.614577 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d7e08a-d60d-454d-b3a4-af55f98c37f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69d7e08a-d60d-454d-b3a4-af55f98c37f8" (UID: "69d7e08a-d60d-454d-b3a4-af55f98c37f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 8 09:57:03.637433 containerd[1500]: time="2025-07-08T09:57:03.637387542Z" level=info msg="shim disconnected" id=8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6 namespace=k8s.io Jul 8 09:57:03.638051 containerd[1500]: time="2025-07-08T09:57:03.637426623Z" level=warning msg="cleaning up after shim disconnected" id=8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6 namespace=k8s.io Jul 8 09:57:03.638051 containerd[1500]: time="2025-07-08T09:57:03.637461543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 8 09:57:03.647453 containerd[1500]: time="2025-07-08T09:57:03.647415304Z" level=info msg="received exit event sandbox_id:\"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" exit_status:137 exited_at:{seconds:1751968623 nanos:570864083}" Jul 8 09:57:03.648334 containerd[1500]: time="2025-07-08T09:57:03.648277591Z" level=info msg="TearDown network for sandbox \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" successfully" Jul 8 09:57:03.648334 containerd[1500]: time="2025-07-08T09:57:03.648309111Z" level=info msg="StopPodSandbox for \"8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6\" returns successfully" Jul 8 09:57:03.673268 kubelet[2654]: I0708 09:57:03.673216 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-etc-cni-netd\") pod 
\"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673268 kubelet[2654]: I0708 09:57:03.673264 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-config-path\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673283 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-bpf-maps\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673297 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cni-path\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673314 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hubble-tls\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673333 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2gnn\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-kube-api-access-g2gnn\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673347 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-xtables-lock\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673439 kubelet[2654]: I0708 09:57:03.673362 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-kernel\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673376 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-net\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673389 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-run\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673406 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-cgroup\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673419 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-lib-modules\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673435 
2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74debb58-4bc0-4cb2-83a1-0963dd5e525d-clustermesh-secrets\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673566 kubelet[2654]: I0708 09:57:03.673449 2654 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hostproc\") pod \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\" (UID: \"74debb58-4bc0-4cb2-83a1-0963dd5e525d\") " Jul 8 09:57:03.673686 kubelet[2654]: I0708 09:57:03.673502 2654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9kmr\" (UniqueName: \"kubernetes.io/projected/69d7e08a-d60d-454d-b3a4-af55f98c37f8-kube-api-access-q9kmr\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.673686 kubelet[2654]: I0708 09:57:03.673512 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69d7e08a-d60d-454d-b3a4-af55f98c37f8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.673686 kubelet[2654]: I0708 09:57:03.673560 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hostproc" (OuterVolumeSpecName: "hostproc") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.673686 kubelet[2654]: I0708 09:57:03.673589 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674144 kubelet[2654]: I0708 09:57:03.673846 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674144 kubelet[2654]: I0708 09:57:03.673906 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674144 kubelet[2654]: I0708 09:57:03.673932 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674144 kubelet[2654]: I0708 09:57:03.673939 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cni-path" (OuterVolumeSpecName: "cni-path") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674144 kubelet[2654]: I0708 09:57:03.673954 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674346 kubelet[2654]: I0708 09:57:03.673970 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674346 kubelet[2654]: I0708 09:57:03.673988 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.674346 kubelet[2654]: I0708 09:57:03.673989 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 8 09:57:03.676444 kubelet[2654]: I0708 09:57:03.676405 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 8 09:57:03.676444 kubelet[2654]: I0708 09:57:03.676424 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74debb58-4bc0-4cb2-83a1-0963dd5e525d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 8 09:57:03.676549 kubelet[2654]: I0708 09:57:03.676524 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-kube-api-access-g2gnn" (OuterVolumeSpecName: "kube-api-access-g2gnn") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "kube-api-access-g2gnn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 8 09:57:03.676793 kubelet[2654]: I0708 09:57:03.676713 2654 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74debb58-4bc0-4cb2-83a1-0963dd5e525d" (UID: "74debb58-4bc0-4cb2-83a1-0963dd5e525d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 8 09:57:03.687271 kubelet[2654]: I0708 09:57:03.686722 2654 scope.go:117] "RemoveContainer" containerID="9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b" Jul 8 09:57:03.687181 systemd[1]: Removed slice kubepods-besteffort-pod69d7e08a_d60d_454d_b3a4_af55f98c37f8.slice - libcontainer container kubepods-besteffort-pod69d7e08a_d60d_454d_b3a4_af55f98c37f8.slice. Jul 8 09:57:03.688808 containerd[1500]: time="2025-07-08T09:57:03.688776279Z" level=info msg="RemoveContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\"" Jul 8 09:57:03.692039 systemd[1]: Removed slice kubepods-burstable-pod74debb58_4bc0_4cb2_83a1_0963dd5e525d.slice - libcontainer container kubepods-burstable-pod74debb58_4bc0_4cb2_83a1_0963dd5e525d.slice. Jul 8 09:57:03.692132 systemd[1]: kubepods-burstable-pod74debb58_4bc0_4cb2_83a1_0963dd5e525d.slice: Consumed 6.615s CPU time, 122.3M memory peak, 124K read from disk, 16.1M written to disk. 
Jul 8 09:57:03.695675 containerd[1500]: time="2025-07-08T09:57:03.695633094Z" level=info msg="RemoveContainer for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" returns successfully" Jul 8 09:57:03.696848 kubelet[2654]: I0708 09:57:03.696692 2654 scope.go:117] "RemoveContainer" containerID="9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b" Jul 8 09:57:03.708117 containerd[1500]: time="2025-07-08T09:57:03.696942745Z" level=error msg="ContainerStatus for \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\": not found" Jul 8 09:57:03.713506 kubelet[2654]: E0708 09:57:03.713445 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\": not found" containerID="9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b" Jul 8 09:57:03.716098 kubelet[2654]: I0708 09:57:03.713511 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b"} err="failed to get container status \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fdebf0514e783501a901a269cf0e6c8f487149673f9221ef348938058798f0b\": not found" Jul 8 09:57:03.716098 kubelet[2654]: I0708 09:57:03.716066 2654 scope.go:117] "RemoveContainer" containerID="670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81" Jul 8 09:57:03.717959 containerd[1500]: time="2025-07-08T09:57:03.717921795Z" level=info msg="RemoveContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\"" Jul 8 09:57:03.722220 
containerd[1500]: time="2025-07-08T09:57:03.722189990Z" level=info msg="RemoveContainer for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" returns successfully" Jul 8 09:57:03.722404 kubelet[2654]: I0708 09:57:03.722384 2654 scope.go:117] "RemoveContainer" containerID="588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6" Jul 8 09:57:03.723874 containerd[1500]: time="2025-07-08T09:57:03.723798323Z" level=info msg="RemoveContainer for \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\"" Jul 8 09:57:03.729187 containerd[1500]: time="2025-07-08T09:57:03.728958445Z" level=info msg="RemoveContainer for \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" returns successfully" Jul 8 09:57:03.729306 kubelet[2654]: I0708 09:57:03.729280 2654 scope.go:117] "RemoveContainer" containerID="1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076" Jul 8 09:57:03.731539 containerd[1500]: time="2025-07-08T09:57:03.731513905Z" level=info msg="RemoveContainer for \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\"" Jul 8 09:57:03.739860 containerd[1500]: time="2025-07-08T09:57:03.739832053Z" level=info msg="RemoveContainer for \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" returns successfully" Jul 8 09:57:03.740028 kubelet[2654]: I0708 09:57:03.739991 2654 scope.go:117] "RemoveContainer" containerID="a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15" Jul 8 09:57:03.741585 containerd[1500]: time="2025-07-08T09:57:03.741529706Z" level=info msg="RemoveContainer for \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\"" Jul 8 09:57:03.744390 containerd[1500]: time="2025-07-08T09:57:03.744312889Z" level=info msg="RemoveContainer for \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" returns successfully" Jul 8 09:57:03.744576 kubelet[2654]: I0708 09:57:03.744553 2654 scope.go:117] "RemoveContainer" 
containerID="76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818" Jul 8 09:57:03.745980 containerd[1500]: time="2025-07-08T09:57:03.745959422Z" level=info msg="RemoveContainer for \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\"" Jul 8 09:57:03.748654 containerd[1500]: time="2025-07-08T09:57:03.748621204Z" level=info msg="RemoveContainer for \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" returns successfully" Jul 8 09:57:03.748864 kubelet[2654]: I0708 09:57:03.748845 2654 scope.go:117] "RemoveContainer" containerID="670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81" Jul 8 09:57:03.749063 containerd[1500]: time="2025-07-08T09:57:03.749015047Z" level=error msg="ContainerStatus for \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\": not found" Jul 8 09:57:03.749382 kubelet[2654]: E0708 09:57:03.749279 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\": not found" containerID="670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81" Jul 8 09:57:03.749382 kubelet[2654]: I0708 09:57:03.749322 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81"} err="failed to get container status \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\": rpc error: code = NotFound desc = an error occurred when try to find container \"670225da8cf9dae12a03d6ccdb6d258f3e05c1799b887b8d9c8edf50a4d8ed81\": not found" Jul 8 09:57:03.749382 kubelet[2654]: I0708 09:57:03.749342 2654 scope.go:117] "RemoveContainer" 
containerID="588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6" Jul 8 09:57:03.749786 containerd[1500]: time="2025-07-08T09:57:03.749674292Z" level=error msg="ContainerStatus for \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\": not found" Jul 8 09:57:03.749858 kubelet[2654]: E0708 09:57:03.749785 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\": not found" containerID="588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6" Jul 8 09:57:03.749858 kubelet[2654]: I0708 09:57:03.749808 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6"} err="failed to get container status \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"588a1dac908e4ca707305e9e47691dd5897675274ce82b1f72ae29a37d2b1fe6\": not found" Jul 8 09:57:03.749858 kubelet[2654]: I0708 09:57:03.749822 2654 scope.go:117] "RemoveContainer" containerID="1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076" Jul 8 09:57:03.750001 containerd[1500]: time="2025-07-08T09:57:03.749968215Z" level=error msg="ContainerStatus for \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\": not found" Jul 8 09:57:03.750171 kubelet[2654]: E0708 09:57:03.750102 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\": not found" containerID="1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076" Jul 8 09:57:03.750171 kubelet[2654]: I0708 09:57:03.750132 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076"} err="failed to get container status \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\": rpc error: code = NotFound desc = an error occurred when try to find container \"1186c16eb0a390c1f49526ad2b6c9611385163677a71d19953f09b1227723076\": not found" Jul 8 09:57:03.750171 kubelet[2654]: I0708 09:57:03.750147 2654 scope.go:117] "RemoveContainer" containerID="a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15" Jul 8 09:57:03.750498 kubelet[2654]: E0708 09:57:03.750464 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\": not found" containerID="a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15" Jul 8 09:57:03.750498 kubelet[2654]: I0708 09:57:03.750480 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15"} err="failed to get container status \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\": not found" Jul 8 09:57:03.750498 kubelet[2654]: I0708 09:57:03.750492 2654 scope.go:117] "RemoveContainer" containerID="76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818" Jul 8 09:57:03.750789 containerd[1500]: time="2025-07-08T09:57:03.750361018Z" 
level=error msg="ContainerStatus for \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4dced53ef287aaccbf9c228d3b0f38f8d1e85e57d1f69910f1ea2f848741f15\": not found" Jul 8 09:57:03.750789 containerd[1500]: time="2025-07-08T09:57:03.750649460Z" level=error msg="ContainerStatus for \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\": not found" Jul 8 09:57:03.750965 kubelet[2654]: E0708 09:57:03.750913 2654 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\": not found" containerID="76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818" Jul 8 09:57:03.750965 kubelet[2654]: I0708 09:57:03.750941 2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818"} err="failed to get container status \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\": rpc error: code = NotFound desc = an error occurred when try to find container \"76f559c86c46855900c96f9574f32d1e6be6b8b9ff2860ddb3d5904c90a54818\": not found" Jul 8 09:57:03.774327 kubelet[2654]: I0708 09:57:03.774297 2654 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774506 kubelet[2654]: I0708 09:57:03.774443 2654 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-etc-cni-netd\") on node 
\"localhost\" DevicePath \"\"" Jul 8 09:57:03.774506 kubelet[2654]: I0708 09:57:03.774460 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774506 kubelet[2654]: I0708 09:57:03.774469 2654 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774506 kubelet[2654]: I0708 09:57:03.774477 2654 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774506 kubelet[2654]: I0708 09:57:03.774488 2654 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774496 2654 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g2gnn\" (UniqueName: \"kubernetes.io/projected/74debb58-4bc0-4cb2-83a1-0963dd5e525d-kube-api-access-g2gnn\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774657 2654 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774668 2654 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774675 2654 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774683 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774690 2654 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774698 2654 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74debb58-4bc0-4cb2-83a1-0963dd5e525d-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:03.774735 kubelet[2654]: I0708 09:57:03.774717 2654 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74debb58-4bc0-4cb2-83a1-0963dd5e525d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 8 09:57:04.488217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6-rootfs.mount: Deactivated successfully. Jul 8 09:57:04.488329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ddfd9c7caf97b687d94053be198f894b80f7dfe72c46178dc9f642ea150b4e6-shm.mount: Deactivated successfully. Jul 8 09:57:04.488385 systemd[1]: var-lib-kubelet-pods-74debb58\x2d4bc0\x2d4cb2\x2d83a1\x2d0963dd5e525d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg2gnn.mount: Deactivated successfully. 
Jul 8 09:57:04.488440 systemd[1]: var-lib-kubelet-pods-74debb58\x2d4bc0\x2d4cb2\x2d83a1\x2d0963dd5e525d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 8 09:57:04.488495 systemd[1]: var-lib-kubelet-pods-74debb58\x2d4bc0\x2d4cb2\x2d83a1\x2d0963dd5e525d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 8 09:57:04.506631 kubelet[2654]: I0708 09:57:04.506581 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d7e08a-d60d-454d-b3a4-af55f98c37f8" path="/var/lib/kubelet/pods/69d7e08a-d60d-454d-b3a4-af55f98c37f8/volumes" Jul 8 09:57:04.506965 kubelet[2654]: I0708 09:57:04.506946 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74debb58-4bc0-4cb2-83a1-0963dd5e525d" path="/var/lib/kubelet/pods/74debb58-4bc0-4cb2-83a1-0963dd5e525d/volumes" Jul 8 09:57:04.555424 kubelet[2654]: E0708 09:57:04.555376 2654 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 8 09:57:05.411187 sshd[4243]: Connection closed by 10.0.0.1 port 33276 Jul 8 09:57:05.411743 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Jul 8 09:57:05.422311 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:33276.service: Deactivated successfully. Jul 8 09:57:05.423906 systemd[1]: session-22.scope: Deactivated successfully. Jul 8 09:57:05.424092 systemd[1]: session-22.scope: Consumed 1.687s CPU time, 26.7M memory peak. Jul 8 09:57:05.424673 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Jul 8 09:57:05.427021 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:58466.service - OpenSSH per-connection server daemon (10.0.0.1:58466). Jul 8 09:57:05.427758 systemd-logind[1484]: Removed session 22. 
Jul 8 09:57:05.484378 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 58466 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:57:05.485401 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:57:05.489085 systemd-logind[1484]: New session 23 of user core. Jul 8 09:57:05.495292 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 8 09:57:06.085200 sshd[4406]: Connection closed by 10.0.0.1 port 58466 Jul 8 09:57:06.085979 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Jul 8 09:57:06.088696 kubelet[2654]: I0708 09:57:06.088649 2654 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-08T09:57:06Z","lastTransitionTime":"2025-07-08T09:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 8 09:57:06.099096 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:58474.service - OpenSSH per-connection server daemon (10.0.0.1:58474). Jul 8 09:57:06.099559 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:58466.service: Deactivated successfully. Jul 8 09:57:06.101109 systemd[1]: session-23.scope: Deactivated successfully. Jul 8 09:57:06.107046 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Jul 8 09:57:06.110393 systemd-logind[1484]: Removed session 23. Jul 8 09:57:06.130079 systemd[1]: Created slice kubepods-burstable-podc66bd06f_8344_41a5_92d5_1d537b523718.slice - libcontainer container kubepods-burstable-podc66bd06f_8344_41a5_92d5_1d537b523718.slice. 
Jul 8 09:57:06.163050 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 58474 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:57:06.164369 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:57:06.168181 systemd-logind[1484]: New session 24 of user core. Jul 8 09:57:06.179309 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 8 09:57:06.186587 kubelet[2654]: I0708 09:57:06.186550 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c66bd06f-8344-41a5-92d5-1d537b523718-clustermesh-secrets\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 09:57:06.186589 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl62\" (UniqueName: \"kubernetes.io/projected/c66bd06f-8344-41a5-92d5-1d537b523718-kube-api-access-dkl62\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 09:57:06.186608 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c66bd06f-8344-41a5-92d5-1d537b523718-cilium-config-path\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 09:57:06.186624 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-host-proc-sys-net\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 
09:57:06.186641 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c66bd06f-8344-41a5-92d5-1d537b523718-hubble-tls\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 09:57:06.186655 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-cni-path\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186676 kubelet[2654]: I0708 09:57:06.186669 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-cilium-cgroup\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186683 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-etc-cni-netd\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186700 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-xtables-lock\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186725 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-host-proc-sys-kernel\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186745 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-cilium-run\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186760 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-hostproc\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.186971 kubelet[2654]: I0708 09:57:06.186772 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-lib-modules\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.187098 kubelet[2654]: I0708 09:57:06.186786 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c66bd06f-8344-41a5-92d5-1d537b523718-bpf-maps\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.187098 kubelet[2654]: I0708 09:57:06.186800 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c66bd06f-8344-41a5-92d5-1d537b523718-cilium-ipsec-secrets\") pod \"cilium-fvmcm\" (UID: \"c66bd06f-8344-41a5-92d5-1d537b523718\") " 
pod="kube-system/cilium-fvmcm" Jul 8 09:57:06.228345 sshd[4421]: Connection closed by 10.0.0.1 port 58474 Jul 8 09:57:06.228779 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Jul 8 09:57:06.245746 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:58474.service: Deactivated successfully. Jul 8 09:57:06.247300 systemd[1]: session-24.scope: Deactivated successfully. Jul 8 09:57:06.248126 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Jul 8 09:57:06.250404 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482). Jul 8 09:57:06.251294 systemd-logind[1484]: Removed session 24. Jul 8 09:57:06.317690 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:QPgzB0uIpaUhwXgs0bhurn/sDuZR1LBudqihdUXwAKk Jul 8 09:57:06.318882 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 09:57:06.322388 systemd-logind[1484]: New session 25 of user core. Jul 8 09:57:06.337336 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 8 09:57:06.437905 containerd[1500]: time="2025-07-08T09:57:06.437860459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fvmcm,Uid:c66bd06f-8344-41a5-92d5-1d537b523718,Namespace:kube-system,Attempt:0,}" Jul 8 09:57:06.458632 containerd[1500]: time="2025-07-08T09:57:06.458589294Z" level=info msg="connecting to shim d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" namespace=k8s.io protocol=ttrpc version=3 Jul 8 09:57:06.481325 systemd[1]: Started cri-containerd-d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce.scope - libcontainer container d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce. 
Jul 8 09:57:06.501712 containerd[1500]: time="2025-07-08T09:57:06.501675575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fvmcm,Uid:c66bd06f-8344-41a5-92d5-1d537b523718,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\"" Jul 8 09:57:06.507257 containerd[1500]: time="2025-07-08T09:57:06.506748893Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 8 09:57:06.513211 containerd[1500]: time="2025-07-08T09:57:06.512661137Z" level=info msg="Container 6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266: CDI devices from CRI Config.CDIDevices: []" Jul 8 09:57:06.517541 containerd[1500]: time="2025-07-08T09:57:06.517505093Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\"" Jul 8 09:57:06.518389 containerd[1500]: time="2025-07-08T09:57:06.518355500Z" level=info msg="StartContainer for \"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\"" Jul 8 09:57:06.519241 containerd[1500]: time="2025-07-08T09:57:06.519096345Z" level=info msg="connecting to shim 6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" protocol=ttrpc version=3 Jul 8 09:57:06.542349 systemd[1]: Started cri-containerd-6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266.scope - libcontainer container 6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266. 
Jul 8 09:57:06.567739 containerd[1500]: time="2025-07-08T09:57:06.567695348Z" level=info msg="StartContainer for \"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\" returns successfully"
Jul 8 09:57:06.581691 systemd[1]: cri-containerd-6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266.scope: Deactivated successfully.
Jul 8 09:57:06.583053 containerd[1500]: time="2025-07-08T09:57:06.583020102Z" level=info msg="received exit event container_id:\"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\" id:\"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\" pid:4501 exited_at:{seconds:1751968626 nanos:582801580}"
Jul 8 09:57:06.583527 containerd[1500]: time="2025-07-08T09:57:06.583202703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\" id:\"6ce55d13a83110c2d255d730fa174db55cf031145f92e33dd9d83ae6b8bb6266\" pid:4501 exited_at:{seconds:1751968626 nanos:582801580}"
Jul 8 09:57:06.698653 containerd[1500]: time="2025-07-08T09:57:06.698497483Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 8 09:57:06.706634 containerd[1500]: time="2025-07-08T09:57:06.706594504Z" level=info msg="Container 8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:57:06.711728 containerd[1500]: time="2025-07-08T09:57:06.711583861Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\""
Jul 8 09:57:06.712743 containerd[1500]: time="2025-07-08T09:57:06.712503028Z" level=info msg="StartContainer for \"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\""
Jul 8 09:57:06.714567 containerd[1500]: time="2025-07-08T09:57:06.714535483Z" level=info msg="connecting to shim 8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" protocol=ttrpc version=3
Jul 8 09:57:06.735313 systemd[1]: Started cri-containerd-8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198.scope - libcontainer container 8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198.
Jul 8 09:57:06.761441 containerd[1500]: time="2025-07-08T09:57:06.761344632Z" level=info msg="StartContainer for \"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\" returns successfully"
Jul 8 09:57:06.766848 systemd[1]: cri-containerd-8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198.scope: Deactivated successfully.
Jul 8 09:57:06.768396 containerd[1500]: time="2025-07-08T09:57:06.768358964Z" level=info msg="received exit event container_id:\"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\" id:\"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\" pid:4546 exited_at:{seconds:1751968626 nanos:768173203}"
Jul 8 09:57:06.769067 containerd[1500]: time="2025-07-08T09:57:06.769033569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\" id:\"8e6fb9b67f647aba31af334140bdadd0c9a03d1f8c41cb1d5b528f0f69b50198\" pid:4546 exited_at:{seconds:1751968626 nanos:768173203}"
Jul 8 09:57:07.702269 containerd[1500]: time="2025-07-08T09:57:07.702203707Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 8 09:57:07.715820 containerd[1500]: time="2025-07-08T09:57:07.715701965Z" level=info msg="Container 01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:57:07.722517 containerd[1500]: time="2025-07-08T09:57:07.722432134Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\""
Jul 8 09:57:07.724280 containerd[1500]: time="2025-07-08T09:57:07.724010905Z" level=info msg="StartContainer for \"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\""
Jul 8 09:57:07.725614 containerd[1500]: time="2025-07-08T09:57:07.725578597Z" level=info msg="connecting to shim 01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" protocol=ttrpc version=3
Jul 8 09:57:07.752329 systemd[1]: Started cri-containerd-01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10.scope - libcontainer container 01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10.
Jul 8 09:57:07.784387 systemd[1]: cri-containerd-01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10.scope: Deactivated successfully.
Jul 8 09:57:07.786261 containerd[1500]: time="2025-07-08T09:57:07.786217277Z" level=info msg="StartContainer for \"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\" returns successfully"
Jul 8 09:57:07.788348 containerd[1500]: time="2025-07-08T09:57:07.788242771Z" level=info msg="received exit event container_id:\"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\" id:\"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\" pid:4592 exited_at:{seconds:1751968627 nanos:788027450}"
Jul 8 09:57:07.788543 containerd[1500]: time="2025-07-08T09:57:07.788523533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\" id:\"01a26174f24900592f264ad5e906ed86d33fd5b4de2c7258dbc4a1158f8a6d10\" pid:4592 exited_at:{seconds:1751968627 nanos:788027450}"
Jul 8 09:57:08.707982 containerd[1500]: time="2025-07-08T09:57:08.707936146Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 8 09:57:08.715079 containerd[1500]: time="2025-07-08T09:57:08.715041716Z" level=info msg="Container 5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:57:08.721650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244761034.mount: Deactivated successfully.
Jul 8 09:57:08.723910 containerd[1500]: time="2025-07-08T09:57:08.723862499Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\""
Jul 8 09:57:08.725097 containerd[1500]: time="2025-07-08T09:57:08.725048587Z" level=info msg="StartContainer for \"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\""
Jul 8 09:57:08.726175 containerd[1500]: time="2025-07-08T09:57:08.726108515Z" level=info msg="connecting to shim 5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" protocol=ttrpc version=3
Jul 8 09:57:08.748382 systemd[1]: Started cri-containerd-5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb.scope - libcontainer container 5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb.
Jul 8 09:57:08.769040 systemd[1]: cri-containerd-5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb.scope: Deactivated successfully.
Jul 8 09:57:08.772374 containerd[1500]: time="2025-07-08T09:57:08.772216720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\" id:\"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\" pid:4630 exited_at:{seconds:1751968628 nanos:770020945}"
Jul 8 09:57:08.772374 containerd[1500]: time="2025-07-08T09:57:08.772243640Z" level=info msg="received exit event container_id:\"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\" id:\"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\" pid:4630 exited_at:{seconds:1751968628 nanos:770020945}"
Jul 8 09:57:08.778448 containerd[1500]: time="2025-07-08T09:57:08.778422444Z" level=info msg="StartContainer for \"5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb\" returns successfully"
Jul 8 09:57:08.788221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b009908e9b3f58f0eb23e644ea9e9b3b3d6222e729b48ca27417c7d3b58d0cb-rootfs.mount: Deactivated successfully.
Jul 8 09:57:09.556760 kubelet[2654]: E0708 09:57:09.556724 2654 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 8 09:57:09.714190 containerd[1500]: time="2025-07-08T09:57:09.713765952Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 8 09:57:09.721806 containerd[1500]: time="2025-07-08T09:57:09.721768887Z" level=info msg="Container de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88: CDI devices from CRI Config.CDIDevices: []"
Jul 8 09:57:09.730917 containerd[1500]: time="2025-07-08T09:57:09.730874749Z" level=info msg="CreateContainer within sandbox \"d64d5c128e816630e29404432ac778eaa1c4fb10c970475560e68d1ec3f9a5ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\""
Jul 8 09:57:09.731318 containerd[1500]: time="2025-07-08T09:57:09.731294632Z" level=info msg="StartContainer for \"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\""
Jul 8 09:57:09.732337 containerd[1500]: time="2025-07-08T09:57:09.732289159Z" level=info msg="connecting to shim de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88" address="unix:///run/containerd/s/50109b6f23711ba7293e280764a939edf42b853c23bfccdd6712976a02c7e2ec" protocol=ttrpc version=3
Jul 8 09:57:09.750373 systemd[1]: Started cri-containerd-de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88.scope - libcontainer container de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88.
Jul 8 09:57:09.774772 containerd[1500]: time="2025-07-08T09:57:09.774681810Z" level=info msg="StartContainer for \"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" returns successfully"
Jul 8 09:57:09.830593 containerd[1500]: time="2025-07-08T09:57:09.830454353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" id:\"975bb74b4411db5a45c956325aa498a2825cceab031a48075a8541c77357c083\" pid:4698 exited_at:{seconds:1751968629 nanos:830080191}"
Jul 8 09:57:10.056183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 8 09:57:10.744069 kubelet[2654]: I0708 09:57:10.743783 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fvmcm" podStartSLOduration=4.74360813 podStartE2EDuration="4.74360813s" podCreationTimestamp="2025-07-08 09:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 09:57:10.74209592 +0000 UTC m=+76.361805136" watchObservedRunningTime="2025-07-08 09:57:10.74360813 +0000 UTC m=+76.363317106"
Jul 8 09:57:12.672707 containerd[1500]: time="2025-07-08T09:57:12.672632252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" id:\"f6a2668b4ee117c85d44807b3ced05982bc286fbbd609c9472ade1078312290d\" pid:5129 exit_status:1 exited_at:{seconds:1751968632 nanos:672017768}"
Jul 8 09:57:12.820568 systemd-networkd[1436]: lxc_health: Link UP
Jul 8 09:57:12.830463 systemd-networkd[1436]: lxc_health: Gained carrier
Jul 8 09:57:13.895339 systemd-networkd[1436]: lxc_health: Gained IPv6LL
Jul 8 09:57:14.799927 containerd[1500]: time="2025-07-08T09:57:14.799869657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" id:\"0217925c9c74b63fce97fa448e04b82dcf219a134173535de9d8de71b4a2f114\" pid:5242 exited_at:{seconds:1751968634 nanos:799391534}"
Jul 8 09:57:14.803308 kubelet[2654]: E0708 09:57:14.803226 2654 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42682->127.0.0.1:35359: write tcp 127.0.0.1:42682->127.0.0.1:35359: write: broken pipe
Jul 8 09:57:16.903781 containerd[1500]: time="2025-07-08T09:57:16.903737815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" id:\"3ddd7ba373ed1dfefa5f278f191b145b424b37c84ce0e643972017b4f7df66af\" pid:5275 exited_at:{seconds:1751968636 nanos:903450093}"
Jul 8 09:57:19.008181 containerd[1500]: time="2025-07-08T09:57:19.008127121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de2c4b2ace63dfdfa1c9d9a3681444f669b5165f9bc7fcac310177080fe8eb88\" id:\"728734ed461822939288032b6e3cb9b363988be0b057c9ca67b05f4356faa32a\" pid:5299 exited_at:{seconds:1751968639 nanos:7848720}"
Jul 8 09:57:19.022187 sshd[4435]: Connection closed by 10.0.0.1 port 58482
Jul 8 09:57:19.022083 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Jul 8 09:57:19.026363 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:58482.service: Deactivated successfully.
Jul 8 09:57:19.028641 systemd[1]: session-25.scope: Deactivated successfully.
Jul 8 09:57:19.030433 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit.
Jul 8 09:57:19.031553 systemd-logind[1484]: Removed session 25.