Sep 12 23:41:36.764023 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:41:36.764042 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Sep 12 22:15:14 -00 2025
Sep 12 23:41:36.764051 kernel: KASLR enabled
Sep 12 23:41:36.764057 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:41:36.764062 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 12 23:41:36.764068 kernel: random: crng init done
Sep 12 23:41:36.764074 kernel: secureboot: Secure boot disabled
Sep 12 23:41:36.764080 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:41:36.764085 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 12 23:41:36.764092 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:41:36.764098 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764104 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764109 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764115 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764122 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764129 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764136 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764141 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764147 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:41:36.764153 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 23:41:36.764159 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 23:41:36.764165 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:41:36.764171 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 12 23:41:36.764177 kernel: Zone ranges:
Sep 12 23:41:36.764183 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:41:36.764190 kernel: DMA32 empty
Sep 12 23:41:36.764196 kernel: Normal empty
Sep 12 23:41:36.764202 kernel: Device empty
Sep 12 23:41:36.764208 kernel: Movable zone start for each node
Sep 12 23:41:36.764214 kernel: Early memory node ranges
Sep 12 23:41:36.764220 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 12 23:41:36.764225 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 12 23:41:36.764231 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 12 23:41:36.764258 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 12 23:41:36.764265 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 12 23:41:36.764271 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 12 23:41:36.764277 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 12 23:41:36.764284 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 12 23:41:36.764290 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 12 23:41:36.764296 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 23:41:36.764305 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 23:41:36.764311 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 23:41:36.764318 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 23:41:36.764325 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:41:36.764332 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 23:41:36.764338 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 12 23:41:36.764344 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:41:36.764350 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:41:36.764357 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:41:36.764363 kernel: psci: Trusted OS migration not required
Sep 12 23:41:36.764369 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:41:36.764376 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 23:41:36.764382 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 23:41:36.764389 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 23:41:36.764396 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 23:41:36.764402 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:41:36.764409 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:41:36.764415 kernel: CPU features: detected: Spectre-v4
Sep 12 23:41:36.764421 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:41:36.764427 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:41:36.764434 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:41:36.764440 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:41:36.764446 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:41:36.764452 kernel: alternatives: applying boot alternatives
Sep 12 23:41:36.764460 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:41:36.764468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:41:36.764474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:41:36.764480 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:41:36.764487 kernel: Fallback order for Node 0: 0
Sep 12 23:41:36.764493 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 23:41:36.764499 kernel: Policy zone: DMA
Sep 12 23:41:36.764505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:41:36.764511 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 23:41:36.764518 kernel: software IO TLB: area num 4.
Sep 12 23:41:36.764524 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 23:41:36.764530 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 12 23:41:36.764538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:41:36.764544 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:41:36.764551 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:41:36.764558 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:41:36.764565 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:41:36.764571 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:41:36.764577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:41:36.764584 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:41:36.764590 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:41:36.764597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:41:36.764603 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:41:36.764611 kernel: GICv3: 256 SPIs implemented
Sep 12 23:41:36.764617 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:41:36.764624 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:41:36.764630 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:41:36.764636 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 23:41:36.764643 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 23:41:36.764649 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 23:41:36.764655 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:41:36.764662 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:41:36.764668 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 23:41:36.764675 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 23:41:36.764681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:41:36.764689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:41:36.764695 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:41:36.764702 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:41:36.764709 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:41:36.764715 kernel: arm-pv: using stolen time PV
Sep 12 23:41:36.764722 kernel: Console: colour dummy device 80x25
Sep 12 23:41:36.764729 kernel: ACPI: Core revision 20240827
Sep 12 23:41:36.764735 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:41:36.764742 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:41:36.764748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:41:36.764763 kernel: landlock: Up and running.
Sep 12 23:41:36.764769 kernel: SELinux: Initializing.
Sep 12 23:41:36.764776 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:41:36.764783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:41:36.764789 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:41:36.764796 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:41:36.764802 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:41:36.764809 kernel: Remapping and enabling EFI services.
Sep 12 23:41:36.764816 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:41:36.764829 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:41:36.764836 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 23:41:36.764842 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 23:41:36.764851 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:41:36.764858 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:41:36.764864 kernel: Detected PIPT I-cache on CPU2
Sep 12 23:41:36.764871 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 23:41:36.764878 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 23:41:36.764886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:41:36.764893 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 23:41:36.764900 kernel: Detected PIPT I-cache on CPU3
Sep 12 23:41:36.764907 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 23:41:36.764914 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 23:41:36.764920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:41:36.764927 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 23:41:36.764934 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:41:36.764941 kernel: SMP: Total of 4 processors activated.
Sep 12 23:41:36.764949 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:41:36.764956 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:41:36.764963 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:41:36.764970 kernel: CPU features: detected: Common not Private translations
Sep 12 23:41:36.764977 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:41:36.764983 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 23:41:36.764990 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:41:36.764997 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:41:36.765004 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:41:36.765012 kernel: CPU features: detected: RAS Extension Support
Sep 12 23:41:36.765019 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 23:41:36.765026 kernel: alternatives: applying system-wide alternatives
Sep 12 23:41:36.765033 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 23:41:36.765040 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 12 23:41:36.765047 kernel: devtmpfs: initialized
Sep 12 23:41:36.765054 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:41:36.765061 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:41:36.765068 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:41:36.765076 kernel: 0 pages in range for non-PLT usage
Sep 12 23:41:36.765083 kernel: 508560 pages in range for PLT usage
Sep 12 23:41:36.765089 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:41:36.765096 kernel: SMBIOS 3.0.0 present.
Sep 12 23:41:36.765103 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 23:41:36.765110 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:41:36.765117 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:41:36.765124 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:41:36.765131 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:41:36.765139 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:41:36.765146 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:41:36.765153 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 12 23:41:36.765160 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:41:36.765166 kernel: cpuidle: using governor menu
Sep 12 23:41:36.765173 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:41:36.765180 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:41:36.765187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:41:36.765194 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:41:36.765202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:41:36.765209 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:41:36.765216 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:41:36.765223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:41:36.765229 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:41:36.765280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:41:36.765289 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:41:36.765296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:41:36.765303 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:41:36.765312 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:41:36.765319 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:41:36.765325 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:41:36.765332 kernel: ACPI: Interpreter enabled
Sep 12 23:41:36.765339 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:41:36.765345 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:41:36.765352 kernel: ACPI: CPU0 has been hot-added
Sep 12 23:41:36.765359 kernel: ACPI: CPU1 has been hot-added
Sep 12 23:41:36.765366 kernel: ACPI: CPU2 has been hot-added
Sep 12 23:41:36.765372 kernel: ACPI: CPU3 has been hot-added
Sep 12 23:41:36.765386 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:41:36.765392 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 23:41:36.765399 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:41:36.765537 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:41:36.765604 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:41:36.765666 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:41:36.765726 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 23:41:36.765804 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 23:41:36.765814 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 23:41:36.765821 kernel: PCI host bridge to bus 0000:00
Sep 12 23:41:36.765888 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 23:41:36.765959 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:41:36.766014 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 23:41:36.766067 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:41:36.766159 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:41:36.766233 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:41:36.766314 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 23:41:36.766376 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 23:41:36.766438 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:41:36.766499 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 23:41:36.766560 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 23:41:36.766625 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 23:41:36.766680 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 23:41:36.766735 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:41:36.766801 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 23:41:36.766811 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:41:36.766818 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:41:36.766825 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:41:36.766835 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:41:36.766842 kernel: iommu: Default domain type: Translated
Sep 12 23:41:36.766848 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:41:36.766855 kernel: efivars: Registered efivars operations
Sep 12 23:41:36.766862 kernel: vgaarb: loaded
Sep 12 23:41:36.766869 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:41:36.766876 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:41:36.766883 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:41:36.766890 kernel: pnp: PnP ACPI init
Sep 12 23:41:36.766959 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 23:41:36.766969 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:41:36.766976 kernel: NET: Registered PF_INET protocol family
Sep 12 23:41:36.766983 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:41:36.766990 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:41:36.766997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:41:36.767004 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:41:36.767011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:41:36.767019 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:41:36.767026 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:41:36.767033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:41:36.767040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:41:36.767047 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:41:36.767054 kernel: kvm [1]: HYP mode not available
Sep 12 23:41:36.767061 kernel: Initialise system trusted keyrings
Sep 12 23:41:36.767068 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:41:36.767075 kernel: Key type asymmetric registered
Sep 12 23:41:36.767083 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:41:36.767090 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 23:41:36.767097 kernel: io scheduler mq-deadline registered
Sep 12 23:41:36.767104 kernel: io scheduler kyber registered
Sep 12 23:41:36.767111 kernel: io scheduler bfq registered
Sep 12 23:41:36.767118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:41:36.767125 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:41:36.767133 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:41:36.767196 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 23:41:36.767207 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:41:36.767214 kernel: thunder_xcv, ver 1.0
Sep 12 23:41:36.767226 kernel: thunder_bgx, ver 1.0
Sep 12 23:41:36.767233 kernel: nicpf, ver 1.0
Sep 12 23:41:36.767256 kernel: nicvf, ver 1.0
Sep 12 23:41:36.767332 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:41:36.767398 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:41:36 UTC (1757720496)
Sep 12 23:41:36.767407 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:41:36.767417 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 23:41:36.767424 kernel: watchdog: NMI not fully supported
Sep 12 23:41:36.767431 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:41:36.767438 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:41:36.767444 kernel: Segment Routing with IPv6
Sep 12 23:41:36.767451 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:41:36.767459 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:41:36.767466 kernel: Key type dns_resolver registered
Sep 12 23:41:36.767473 kernel: registered taskstats version 1
Sep 12 23:41:36.767480 kernel: Loading compiled-in X.509 certificates
Sep 12 23:41:36.767488 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 4d2b25dbd7cb4cb70d9284570c2ea7dd89d62e99'
Sep 12 23:41:36.767495 kernel: Demotion targets for Node 0: null
Sep 12 23:41:36.767502 kernel: Key type .fscrypt registered
Sep 12 23:41:36.767509 kernel: Key type fscrypt-provisioning registered
Sep 12 23:41:36.767516 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:41:36.767527 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:41:36.767534 kernel: ima: No architecture policies found
Sep 12 23:41:36.767541 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:41:36.767549 kernel: clk: Disabling unused clocks
Sep 12 23:41:36.767556 kernel: PM: genpd: Disabling unused power domains
Sep 12 23:41:36.767563 kernel: Warning: unable to open an initial console.
Sep 12 23:41:36.767570 kernel: Freeing unused kernel memory: 38976K
Sep 12 23:41:36.767577 kernel: Run /init as init process
Sep 12 23:41:36.767584 kernel: with arguments:
Sep 12 23:41:36.767591 kernel: /init
Sep 12 23:41:36.767597 kernel: with environment:
Sep 12 23:41:36.767604 kernel: HOME=/
Sep 12 23:41:36.767612 kernel: TERM=linux
Sep 12 23:41:36.767619 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:41:36.767627 systemd[1]: Successfully made /usr/ read-only.
Sep 12 23:41:36.767636 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:41:36.767644 systemd[1]: Detected virtualization kvm.
Sep 12 23:41:36.767652 systemd[1]: Detected architecture arm64.
Sep 12 23:41:36.767659 systemd[1]: Running in initrd.
Sep 12 23:41:36.767666 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:41:36.767675 systemd[1]: Hostname set to .
Sep 12 23:41:36.767682 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:41:36.767690 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:41:36.767697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:41:36.767705 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:41:36.767712 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:41:36.767720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:41:36.767727 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:41:36.767737 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:41:36.767746 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:41:36.767759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:41:36.767768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:41:36.767775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:41:36.767783 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:41:36.767790 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:41:36.767799 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:41:36.767807 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:41:36.767814 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:41:36.767822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:41:36.767829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:41:36.767837 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 23:41:36.767844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:41:36.767852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:41:36.767860 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:41:36.767868 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:41:36.767875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:41:36.767883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:41:36.767890 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:41:36.767898 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 23:41:36.767905 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:41:36.767913 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:41:36.767920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:41:36.767929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:41:36.767936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:41:36.767944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:41:36.767952 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:41:36.767976 systemd-journald[245]: Collecting audit messages is disabled.
Sep 12 23:41:36.767994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:41:36.768003 systemd-journald[245]: Journal started
Sep 12 23:41:36.768022 systemd-journald[245]: Runtime Journal (/run/log/journal/f8a2e94f941a40a39d215e787572ae85) is 6M, max 48.5M, 42.4M free.
Sep 12 23:41:36.760718 systemd-modules-load[246]: Inserted module 'overlay'
Sep 12 23:41:36.770271 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:41:36.775695 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:41:36.772895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:41:36.777161 kernel: Bridge firewalling registered
Sep 12 23:41:36.774872 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:41:36.776094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:41:36.776202 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 12 23:41:36.783359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:41:36.786209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:41:36.787747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:41:36.788355 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 23:41:36.789096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:41:36.803422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:41:36.810153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:41:36.811466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:41:36.815377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:41:36.817195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:41:36.819096 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:41:36.839555 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:41:36.852774 systemd-resolved[286]: Positive Trust Anchors:
Sep 12 23:41:36.852793 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:41:36.852829 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:41:36.857601 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 12 23:41:36.858588 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:41:36.861077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:41:36.913262 kernel: SCSI subsystem initialized
Sep 12 23:41:36.917254 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:41:36.926308 kernel: iscsi: registered transport (tcp)
Sep 12 23:41:36.937511 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:41:36.937549 kernel: QLogic iSCSI HBA Driver
Sep 12 23:41:36.953324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:41:36.969274 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:41:36.971045 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:41:37.012214 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:41:37.014157 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:41:37.075257 kernel: raid6: neonx8 gen() 15716 MB/s
Sep 12 23:41:37.092252 kernel: raid6: neonx4 gen() 15757 MB/s
Sep 12 23:41:37.109251 kernel: raid6: neonx2 gen() 13135 MB/s
Sep 12 23:41:37.126250 kernel: raid6: neonx1 gen() 10520 MB/s
Sep 12 23:41:37.143251 kernel: raid6: int64x8 gen() 6874 MB/s
Sep 12 23:41:37.160250 kernel: raid6: int64x4 gen() 7306 MB/s
Sep 12 23:41:37.177256 kernel: raid6: int64x2 gen() 6079 MB/s
Sep 12 23:41:37.194274 kernel: raid6: int64x1 gen() 5040 MB/s
Sep 12 23:41:37.194319 kernel: raid6: using algorithm neonx4 gen() 15757 MB/s
Sep 12 23:41:37.211251 kernel: raid6: .... xor() 12285 MB/s, rmw enabled
Sep 12 23:41:37.211270 kernel: raid6: using neon recovery algorithm
Sep 12 23:41:37.216251 kernel: xor: measuring software checksum speed
Sep 12 23:41:37.216269 kernel: 8regs : 20650 MB/sec
Sep 12 23:41:37.217731 kernel: 32regs : 20020 MB/sec
Sep 12 23:41:37.217745 kernel: arm64_neon : 27295 MB/sec
Sep 12 23:41:37.217757 kernel: xor: using function: arm64_neon (27295 MB/sec)
Sep 12 23:41:37.269266 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:41:37.275234 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:41:37.278423 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:41:37.308811 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 12 23:41:37.312849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:41:37.314483 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:41:37.341079 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 12 23:41:37.361614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:41:37.363500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:41:37.420372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:41:37.423466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:41:37.464515 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 23:41:37.464954 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:41:37.469337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:41:37.469375 kernel: GPT:9289727 != 19775487
Sep 12 23:41:37.470247 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:41:37.471609 kernel: GPT:9289727 != 19775487
Sep 12 23:41:37.471637 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:41:37.472413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:41:37.476919 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:41:37.477046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:41:37.479481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:41:37.482553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:41:37.510951 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:41:37.512149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:41:37.521425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:41:37.522549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:41:37.533321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:41:37.540080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:41:37.541075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:41:37.553436 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:41:37.554335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:41:37.555932 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:41:37.558196 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:41:37.559849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:41:37.589315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:41:37.589513 disk-uuid[590]: Primary Header is updated.
Sep 12 23:41:37.589513 disk-uuid[590]: Secondary Entries is updated.
Sep 12 23:41:37.589513 disk-uuid[590]: Secondary Header is updated.
Sep 12 23:41:37.590717 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:41:38.601468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:41:38.601515 disk-uuid[598]: The operation has completed successfully.
Sep 12 23:41:38.632325 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:41:38.632423 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:41:38.659773 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:41:38.672991 sh[611]: Success
Sep 12 23:41:38.684603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:41:38.684650 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:41:38.685498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 23:41:38.692271 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 23:41:38.720064 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:41:38.721626 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:41:38.726217 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:41:38.734047 kernel: BTRFS: device fsid 103b8b46-5d84-49b9-83b1-52780b53e7b3 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 12 23:41:38.734077 kernel: BTRFS info (device dm-0): first mount of filesystem 103b8b46-5d84-49b9-83b1-52780b53e7b3
Sep 12 23:41:38.734087 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:41:38.738253 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:41:38.738276 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 23:41:38.738984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:41:38.740012 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:41:38.741109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:41:38.741792 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:41:38.743076 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:41:38.763354 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 12 23:41:38.763382 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:41:38.764850 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:41:38.767263 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:41:38.767294 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:41:38.771250 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:41:38.773285 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:41:38.775021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:41:38.831707 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:41:38.834845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:41:38.870808 systemd-networkd[795]: lo: Link UP
Sep 12 23:41:38.871528 systemd-networkd[795]: lo: Gained carrier
Sep 12 23:41:38.872117 ignition[701]: Ignition 2.21.0
Sep 12 23:41:38.872327 systemd-networkd[795]: Enumeration completed
Sep 12 23:41:38.872124 ignition[701]: Stage: fetch-offline
Sep 12 23:41:38.872568 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:41:38.872150 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:41:38.873636 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:41:38.872157 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:41:38.873640 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:41:38.872321 ignition[701]: parsed url from cmdline: ""
Sep 12 23:41:38.873882 systemd[1]: Reached target network.target - Network.
Sep 12 23:41:38.872324 ignition[701]: no config URL provided
Sep 12 23:41:38.874370 systemd-networkd[795]: eth0: Link UP
Sep 12 23:41:38.872329 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:41:38.874458 systemd-networkd[795]: eth0: Gained carrier
Sep 12 23:41:38.872335 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:41:38.874467 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:41:38.872352 ignition[701]: op(1): [started] loading QEMU firmware config module
Sep 12 23:41:38.872356 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:41:38.884481 ignition[701]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:41:38.890290 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:41:38.926378 ignition[701]: parsing config with SHA512: 92dd36be8588cba9ab290778b6682ea57178f23b75532cd1738abcf0b3ca749bc95d9c26a89cb9cdfa34db4d12a10fb222bd1b3bd8765862204a47210a54ce71
Sep 12 23:41:38.931622 unknown[701]: fetched base config from "system"
Sep 12 23:41:38.931634 unknown[701]: fetched user config from "qemu"
Sep 12 23:41:38.932014 ignition[701]: fetch-offline: fetch-offline passed
Sep 12 23:41:38.932066 ignition[701]: Ignition finished successfully
Sep 12 23:41:38.934093 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:41:38.936056 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:41:38.938346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:41:38.966145 ignition[808]: Ignition 2.21.0
Sep 12 23:41:38.966164 ignition[808]: Stage: kargs
Sep 12 23:41:38.966321 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:41:38.966331 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:41:38.969332 ignition[808]: kargs: kargs passed
Sep 12 23:41:38.969390 ignition[808]: Ignition finished successfully
Sep 12 23:41:38.972473 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:41:38.974915 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:41:39.013908 ignition[816]: Ignition 2.21.0
Sep 12 23:41:39.013925 ignition[816]: Stage: disks
Sep 12 23:41:39.014065 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:41:39.014073 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:41:39.015838 ignition[816]: disks: disks passed
Sep 12 23:41:39.017467 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:41:39.015898 ignition[816]: Ignition finished successfully
Sep 12 23:41:39.018447 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:41:39.019738 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:41:39.021107 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:41:39.022545 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:41:39.023934 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:41:39.025959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:41:39.056786 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 23:41:39.060895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:41:39.066454 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:41:39.129271 kernel: EXT4-fs (vda9): mounted filesystem 01c463ed-b282-4a97-bc2e-d1c81f25bb05 r/w with ordered data mode. Quota mode: none.
Sep 12 23:41:39.129300 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:41:39.130339 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:41:39.132260 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:41:39.133674 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:41:39.134451 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:41:39.134488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:41:39.134511 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:41:39.145691 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:41:39.147917 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:41:39.150391 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Sep 12 23:41:39.152261 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:41:39.152291 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:41:39.154288 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:41:39.154321 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:41:39.155053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:41:39.183207 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:41:39.187130 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:41:39.190911 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:41:39.194327 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:41:39.257518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:41:39.259474 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:41:39.260817 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:41:39.279267 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:41:39.290323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:41:39.295636 ignition[949]: INFO : Ignition 2.21.0
Sep 12 23:41:39.295636 ignition[949]: INFO : Stage: mount
Sep 12 23:41:39.296854 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:41:39.296854 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:41:39.298426 ignition[949]: INFO : mount: mount passed
Sep 12 23:41:39.298426 ignition[949]: INFO : Ignition finished successfully
Sep 12 23:41:39.299690 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:41:39.301250 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:41:39.851487 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:41:39.853020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:41:39.870335 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Sep 12 23:41:39.870371 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:41:39.870382 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:41:39.873266 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:41:39.873289 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:41:39.874593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:41:39.900810 ignition[979]: INFO : Ignition 2.21.0
Sep 12 23:41:39.900810 ignition[979]: INFO : Stage: files
Sep 12 23:41:39.903041 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:41:39.903041 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:41:39.903041 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:41:39.906354 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:41:39.906354 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:41:39.909006 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:41:39.910069 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:41:39.910069 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:41:39.909622 unknown[979]: wrote ssh authorized keys file for user: core
Sep 12 23:41:39.913011 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:41:39.913011 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 23:41:40.020726 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:41:40.478723 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:41:40.478723 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:41:40.481563 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 23:41:40.731339 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 23:41:40.828016 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:41:40.828016 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:41:40.830807 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:41:40.840245 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 23:41:40.926437 systemd-networkd[795]: eth0: Gained IPv6LL
Sep 12 23:41:41.197960 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 23:41:41.715111 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:41:41.715111 ignition[979]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 23:41:41.718245 ignition[979]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:41:41.731028 ignition[979]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:41:41.733978 ignition[979]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:41:41.736371 ignition[979]: INFO : files: files passed
Sep 12 23:41:41.736371 ignition[979]: INFO : Ignition finished successfully
Sep 12 23:41:41.737016 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:41:41.739118 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:41:41.740617 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:41:41.761207 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:41:41.761306 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:41:41.763745 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:41:41.765010 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:41:41.765010 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:41:41.767391 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:41:41.768272 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:41:41.769602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:41:41.771928 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:41:41.807911 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:41:41.808747 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:41:41.809826 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:41:41.811143 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:41:41.812581 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:41:41.813230 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:41:41.838034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:41:41.840054 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:41:41.860693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:41:41.861634 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:41:41.863158 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:41:41.864583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:41:41.864691 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:41:41.866639 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:41:41.868041 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:41:41.869275 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:41:41.870590 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:41:41.872065 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:41:41.873552 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 23:41:41.875154 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:41:41.876565 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:41:41.878048 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:41:41.879542 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:41:41.880848 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:41:41.882060 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:41:41.882165 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:41:41.883975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:41:41.885420 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:41:41.886845 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:41:41.890276 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:41:41.891183 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:41:41.891312 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:41:41.893611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:41:41.893720 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:41:41.895256 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:41:41.896465 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:41:41.899295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:41:41.900293 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:41:41.902062 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 23:41:41.903254 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:41:41.903372 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:41:41.904542 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:41:41.904651 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:41:41.905802 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:41:41.905953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:41:41.907306 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:41:41.907448 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:41:41.909313 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:41:41.911310 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:41:41.912042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:41:41.912218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:41:41.913630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:41:41.913778 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:41:41.919765 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:41:41.920364 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:41:41.927839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:41:41.934673 ignition[1036]: INFO : Ignition 2.21.0 Sep 12 23:41:41.934673 ignition[1036]: INFO : Stage: umount Sep 12 23:41:41.936981 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:41:41.936981 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:41:41.938653 ignition[1036]: INFO : umount: umount passed Sep 12 23:41:41.938653 ignition[1036]: INFO : Ignition finished successfully Sep 12 23:41:41.939815 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:41:41.939918 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:41:41.941534 systemd[1]: Stopped target network.target - Network. Sep 12 23:41:41.942618 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:41:41.942670 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:41:41.943948 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:41:41.943987 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:41:41.945230 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:41:41.945291 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:41:41.946690 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:41:41.946730 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:41:41.948286 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:41:41.949508 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:41:41.956021 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:41:41.956137 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:41:41.959520 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 23:41:41.959769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 12 23:41:41.959813 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:41:41.962730 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 23:41:41.962967 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:41:41.963066 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:41:41.966144 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 23:41:41.967335 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 23:41:41.968921 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:41:41.968965 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:41:41.971155 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:41:41.971893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:41:41.971940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:41:41.973460 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:41:41.973499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:41:41.975724 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:41:41.975771 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:41:41.977112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:41:41.980339 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 23:41:41.993970 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:41:41.994120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:41:41.996813 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:41:41.996895 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:41:41.998485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:41:41.998548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:41:41.999428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:41:41.999459 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:41:42.000714 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:41:42.000766 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:41:42.002943 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:41:42.002986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:41:42.005016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:41:42.005061 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:41:42.007836 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:41:42.009220 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 23:41:42.009283 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:41:42.011850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:41:42.011894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 23:41:42.014258 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 23:41:42.014297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:41:42.016865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:41:42.016905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:41:42.018751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:41:42.018790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:41:42.021680 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:41:42.023352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:41:42.024562 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:41:42.024640 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:41:42.027779 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:41:42.027889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:41:42.029424 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:41:42.031423 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:41:42.051192 systemd[1]: Switching root. Sep 12 23:41:42.088010 systemd-journald[245]: Journal stopped Sep 12 23:41:42.821051 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Sep 12 23:41:42.821097 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:41:42.821113 kernel: SELinux: policy capability open_perms=1 Sep 12 23:41:42.821123 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:41:42.821136 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:41:42.821145 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:41:42.821155 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:41:42.821164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:41:42.821174 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:41:42.821186 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 23:41:42.821195 kernel: audit: type=1403 audit(1757720502.269:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:41:42.821205 systemd[1]: Successfully loaded SELinux policy in 47.678ms. Sep 12 23:41:42.821221 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.981ms. Sep 12 23:41:42.821232 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 23:41:42.821257 systemd[1]: Detected virtualization kvm. Sep 12 23:41:42.821269 systemd[1]: Detected architecture arm64. Sep 12 23:41:42.821280 systemd[1]: Detected first boot. Sep 12 23:41:42.821290 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:41:42.821300 zram_generator::config[1084]: No configuration found. Sep 12 23:41:42.821311 kernel: NET: Registered PF_VSOCK protocol family Sep 12 23:41:42.821320 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:41:42.821331 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 12 23:41:42.821342 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:41:42.821352 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:41:42.821362 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:41:42.821374 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:41:42.821384 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:41:42.821394 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:41:42.821406 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:41:42.821416 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:41:42.821426 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:41:42.821437 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:41:42.821447 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:41:42.821458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:41:42.821468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:41:42.821478 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:41:42.821488 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:41:42.821498 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:41:42.821508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:41:42.821518 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 23:41:42.821528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:41:42.821539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:41:42.821550 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:41:42.821560 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:41:42.821570 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:41:42.821580 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:41:42.821589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:41:42.821602 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:41:42.821612 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:41:42.821623 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:41:42.821635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:41:42.821645 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:41:42.821655 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 23:41:42.821665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:41:42.821675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:41:42.821685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 23:41:42.821695 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:41:42.821705 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:41:42.821715 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:41:42.821726 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:41:42.821742 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:41:42.821756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:41:42.821767 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:41:42.821777 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:41:42.821787 systemd[1]: Reached target machines.target - Containers. Sep 12 23:41:42.821797 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:41:42.821807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:41:42.821818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:41:42.821828 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:41:42.821838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:41:42.821849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:41:42.821859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:41:42.821868 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:41:42.821879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:41:42.821889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:41:42.821899 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:41:42.821910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:41:42.821920 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:41:42.821931 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:41:42.821941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:41:42.821951 kernel: fuse: init (API version 7.41) Sep 12 23:41:42.821963 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:41:42.821973 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:41:42.821984 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:41:42.821994 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:41:42.822005 kernel: ACPI: bus type drm_connector registered Sep 12 23:41:42.822014 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 23:41:42.822024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:41:42.822034 systemd[1]: verity-setup.service: Deactivated successfully. 
Sep 12 23:41:42.822044 systemd[1]: Stopped verity-setup.service. Sep 12 23:41:42.822055 kernel: loop: module loaded Sep 12 23:41:42.822066 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:41:42.822075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:41:42.822085 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:41:42.822095 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:41:42.822105 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:41:42.822114 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:41:42.822124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:41:42.822136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:41:42.822167 systemd-journald[1156]: Collecting audit messages is disabled. Sep 12 23:41:42.822190 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:41:42.822201 systemd-journald[1156]: Journal started Sep 12 23:41:42.822222 systemd-journald[1156]: Runtime Journal (/run/log/journal/f8a2e94f941a40a39d215e787572ae85) is 6M, max 48.5M, 42.4M free. Sep 12 23:41:42.626391 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:41:42.645130 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 23:41:42.645521 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:41:42.823801 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:41:42.826260 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:41:42.826660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:41:42.826824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:41:42.827936 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:41:42.828090 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:41:42.829183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:41:42.829361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:41:42.830485 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:41:42.831311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:41:42.832349 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:41:42.832498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:41:42.833567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:41:42.834636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:41:42.835840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:41:42.837033 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 23:41:42.848392 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:41:42.850458 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:41:42.852159 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:41:42.853095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Sep 12 23:41:42.853144 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:41:42.854851 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 23:41:42.863219 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:41:42.864084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:41:42.865078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:41:42.866979 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:41:42.868057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:41:42.868982 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:41:42.870482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:41:42.871466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:41:42.876898 systemd-journald[1156]: Time spent on flushing to /var/log/journal/f8a2e94f941a40a39d215e787572ae85 is 21.396ms for 889 entries. Sep 12 23:41:42.876898 systemd-journald[1156]: System Journal (/var/log/journal/f8a2e94f941a40a39d215e787572ae85) is 8M, max 195.6M, 187.6M free. Sep 12 23:41:42.913484 systemd-journald[1156]: Received client request to flush runtime journal. Sep 12 23:41:42.913527 kernel: loop0: detected capacity change from 0 to 107312 Sep 12 23:41:42.913544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:41:42.875358 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:41:42.877892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:41:42.881174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:41:42.884015 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:41:42.886142 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:41:42.895875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:41:42.899509 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:41:42.900487 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:41:42.902565 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 23:41:42.915225 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:41:42.922293 kernel: loop1: detected capacity change from 0 to 211168 Sep 12 23:41:42.925974 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Sep 12 23:41:42.925993 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Sep 12 23:41:42.928754 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 23:41:42.937287 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:41:42.939676 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
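Annotation: the journal sizing lines above (Runtime Journal 6M of a 48.5M max, System Journal 8M of a 195.6M max) reflect journald's default sizing relative to the backing filesystems. If fixed caps were wanted instead, they could be pinned with a drop-in; a minimal sketch using stock journald options, with the byte values chosen here only to mirror the logged maxima:

# /etc/systemd/journald.conf.d/00-size.conf (hypothetical drop-in)
[Journal]
# Cap the volatile journal in /run/log/journal, used before /var is writable.
RuntimeMaxUse=48M
# Cap the persistent journal under /var/log/journal after the flush logged above.
SystemMaxUse=196M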
Sep 12 23:41:42.946292 kernel: loop2: detected capacity change from 0 to 138376 Sep 12 23:41:42.963880 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:41:42.966084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:41:42.973271 kernel: loop3: detected capacity change from 0 to 107312 Sep 12 23:41:42.979254 kernel: loop4: detected capacity change from 0 to 211168 Sep 12 23:41:42.985252 kernel: loop5: detected capacity change from 0 to 138376 Sep 12 23:41:42.985866 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 23:41:42.985890 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Sep 12 23:41:42.989471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:41:42.992138 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 23:41:42.992497 (sd-merge)[1223]: Merged extensions into '/usr'. Sep 12 23:41:42.995813 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:41:42.995828 systemd[1]: Reloading... Sep 12 23:41:43.046324 zram_generator::config[1253]: No configuration found. Sep 12 23:41:43.116853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:41:43.124948 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:41:43.177655 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:41:43.177943 systemd[1]: Reloading finished in 181 ms. Sep 12 23:41:43.204082 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:41:43.205306 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:41:43.215418 systemd[1]: Starting ensure-sysext.service... Sep 12 23:41:43.216912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:41:43.224468 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:41:43.224482 systemd[1]: Reloading... Sep 12 23:41:43.231190 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 23:41:43.231220 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 23:41:43.231796 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:41:43.232060 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:41:43.232759 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:41:43.233059 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Sep 12 23:41:43.233162 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Sep 12 23:41:43.235721 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:41:43.235815 systemd-tmpfiles[1286]: Skipping /boot Sep 12 23:41:43.244345 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 12 23:41:43.244427 systemd-tmpfiles[1286]: Skipping /boot Sep 12 23:41:43.277261 zram_generator::config[1313]: No configuration found. Sep 12 23:41:43.339309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:41:43.400233 systemd[1]: Reloading finished in 175 ms. Sep 12 23:41:43.409678 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:41:43.414810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:41:43.424328 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:41:43.426370 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:41:43.428177 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:41:43.432372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:41:43.434780 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:41:43.438689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:41:43.443467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:41:43.450939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:41:43.453659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:41:43.456116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:41:43.458504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:41:43.458611 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:41:43.468698 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:41:43.472267 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:41:43.473039 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Sep 12 23:41:43.473680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:41:43.473874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:41:43.475332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:41:43.475468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:41:43.476827 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:41:43.476955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:41:43.477119 augenrules[1377]: No rules Sep 12 23:41:43.478213 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:41:43.478384 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:41:43.487543 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:41:43.490538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
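Annotation: the (sd-merge) lines above show systemd-sysext combining the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr, which is why the reload that follows picks up the container-runtime units. The merge can also be inspected or repeated by hand with the stock tool; a minimal sketch, assuming the standard systemd-sysext subcommands:

# Show which extension images are present and whether they are currently merged.
systemd-sysext status
# Unmerge and re-merge the /usr overlay after adding or replacing a .raw image
# under /etc/extensions or /var/lib/extensions.
systemd-sysext refresh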
Sep 12 23:41:43.494774 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:41:43.495608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:41:43.496453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:41:43.500356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:41:43.506954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:41:43.510429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:41:43.511399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:41:43.511442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:41:43.512753 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:41:43.514857 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:41:43.518517 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:41:43.521311 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:41:43.521915 systemd[1]: Finished ensure-sysext.service. Sep 12 23:41:43.522843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:41:43.522985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:41:43.524169 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:41:43.524338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:41:43.525232 augenrules[1401]: /sbin/augenrules: No change Sep 12 23:41:43.529232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:41:43.531274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:41:43.533569 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:41:43.533697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:41:43.538467 augenrules[1444]: No rules Sep 12 23:41:43.539705 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:41:43.541549 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:41:43.541717 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:41:43.551093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:41:43.551145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:41:43.554211 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:41:43.595724 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 23:41:43.609010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:41:43.613741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 12 23:41:43.627415 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:41:43.638638 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:41:43.702381 systemd-networkd[1422]: lo: Link UP Sep 12 23:41:43.702392 systemd-networkd[1422]: lo: Gained carrier Sep 12 23:41:43.703171 systemd-networkd[1422]: Enumeration completed Sep 12 23:41:43.703603 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:41:43.705775 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:41:43.705784 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:41:43.706126 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 23:41:43.707477 systemd-networkd[1422]: eth0: Link UP Sep 12 23:41:43.707585 systemd-networkd[1422]: eth0: Gained carrier Sep 12 23:41:43.707602 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:41:43.709139 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:41:43.724453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:41:43.729002 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:41:43.729991 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:41:43.730898 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:41:43.733416 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection. Sep 12 23:41:43.734421 systemd-timesyncd[1452]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:41:43.734521 systemd-timesyncd[1452]: Initial clock synchronization to Fri 2025-09-12 23:41:43.496946 UTC. Sep 12 23:41:43.735529 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 23:41:43.737514 systemd-resolved[1352]: Positive Trust Anchors: Sep 12 23:41:43.737528 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:41:43.737561 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:41:43.745259 systemd-resolved[1352]: Defaulting to hostname 'linux'. Sep 12 23:41:43.746800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:41:43.749006 systemd[1]: Reached target network.target - Network. Sep 12 23:41:43.751378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:41:43.777038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
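Annotation: eth0 above is matched by the shipped catch-all /usr/lib/systemd/network/zz-default.network and configured via DHCPv4 (10.0.0.74/16, gateway 10.0.0.1). The shipped file is not reproduced in the journal; a minimal .network file with the same observable effect, placed as a local override, would look roughly like this (the match pattern is an assumption, the real default matches more broadly):

# /etc/systemd/network/50-dhcp.network (hypothetical local equivalent)
[Match]
Name=eth*

[Network]
DHCP=yes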
Sep 12 23:41:43.778209 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:41:43.779145 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:41:43.780163 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:41:43.781345 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:41:43.782211 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:41:43.783134 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:41:43.784296 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:41:43.784326 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:41:43.784972 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:41:43.786723 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:41:43.788696 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:41:43.791463 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 23:41:43.792545 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 23:41:43.793536 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 23:41:43.798063 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:41:43.799323 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 23:41:43.800716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:41:43.801625 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:41:43.802386 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:41:43.803094 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:41:43.803119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:41:43.804057 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:41:43.805829 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:41:43.807459 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:41:43.809633 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:41:43.811506 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:41:43.812418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:41:43.813630 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:41:43.816378 jq[1501]: false Sep 12 23:41:43.816367 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:41:43.817943 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:41:43.821233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:41:43.824529 systemd[1]: Starting systemd-logind.service - User Login Management... 
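Annotation: prepare-helm.service, written by Ignition earlier and started above, is described in the journal only as "Unpack helm to /opt/bin"; its contents are not shown. The general shape of such a oneshot fetch-and-unpack unit might be the following, where the URL, helm version and command line are all placeholders, not the real unit:

[Unit]
Description=Unpack helm to /opt/bin
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
# Hypothetical fetch-and-unpack step; the actual commands are not in this journal.
ExecStart=/bin/sh -c 'curl -fsSL https://get.helm.sh/helm-v3.15.0-linux-arm64.tar.gz | tar -xz -C /opt/bin --strip-components=1 linux-arm64/helm'

[Install]
WantedBy=multi-user.target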
Sep 12 23:41:43.825195 extend-filesystems[1502]: Found /dev/vda6 Sep 12 23:41:43.827295 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:41:43.827752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:41:43.828581 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:41:43.831069 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:41:43.834403 extend-filesystems[1502]: Found /dev/vda9 Sep 12 23:41:43.834775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:41:43.837427 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:41:43.837716 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:41:43.837886 extend-filesystems[1502]: Checking size of /dev/vda9 Sep 12 23:41:43.838105 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:41:43.839342 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:41:43.840059 jq[1518]: true Sep 12 23:41:43.841646 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:41:43.841830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:41:43.852422 extend-filesystems[1502]: Resized partition /dev/vda9 Sep 12 23:41:43.859009 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 23:41:43.863564 update_engine[1517]: I20250912 23:41:43.861020 1517 main.cc:92] Flatcar Update Engine starting Sep 12 23:41:43.872328 jq[1527]: true Sep 12 23:41:43.873262 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:41:43.890287 dbus-daemon[1499]: [system] SELinux support is enabled Sep 12 23:41:43.888781 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:41:43.890543 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:41:43.903255 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:41:43.911201 tar[1524]: linux-arm64/LICENSE Sep 12 23:41:43.911543 update_engine[1517]: I20250912 23:41:43.895277 1517 update_check_scheduler.cc:74] Next update check in 3m1s Sep 12 23:41:43.905635 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:41:43.911626 tar[1524]: linux-arm64/helm Sep 12 23:41:43.911652 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:41:43.911652 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:41:43.911652 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:41:43.907093 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:41:43.916679 extend-filesystems[1502]: Resized filesystem in /dev/vda9 Sep 12 23:41:43.907118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
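Annotation: the extend-filesystems.service output above is Flatcar's first-boot root grow: the /dev/vda9 partition is resized and the mounted ext4 filesystem is then resized online, which the kernel confirms (553472 to 1864699 4k blocks). Done by hand on a generic system the same two steps would reduce to something like this sketch, assuming /dev/vda9 is the mounted root:

# Grow the partition to fill the disk (growpart is from cloud-utils).
growpart /dev/vda 9
# Online-resize the mounted ext4 filesystem to fill the partition, as logged above.
resize2fs /dev/vda9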
Sep 12 23:41:43.908737 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:41:43.908758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:41:43.913379 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:41:43.914496 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:41:43.914695 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:41:43.922161 bash[1558]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:41:43.924799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:41:43.926422 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:41:43.939523 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:41:43.939978 systemd-logind[1515]: New seat seat0. Sep 12 23:41:43.940906 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:41:43.984340 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:41:44.068457 containerd[1542]: time="2025-09-12T23:41:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 23:41:44.069153 containerd[1542]: time="2025-09-12T23:41:44.069105836Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 12 23:41:44.079159 containerd[1542]: time="2025-09-12T23:41:44.079107753Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.305µs" Sep 12 23:41:44.079159 containerd[1542]: time="2025-09-12T23:41:44.079144315Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 23:41:44.079262 containerd[1542]: time="2025-09-12T23:41:44.079173425Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 23:41:44.079441 containerd[1542]: time="2025-09-12T23:41:44.079362290Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 23:41:44.079441 containerd[1542]: time="2025-09-12T23:41:44.079385888Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 23:41:44.079441 containerd[1542]: time="2025-09-12T23:41:44.079412902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079514 containerd[1542]: time="2025-09-12T23:41:44.079475197Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079514 containerd[1542]: time="2025-09-12T23:41:44.079490955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079732 containerd[1542]: time="2025-09-12T23:41:44.079679975Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079732 containerd[1542]: time="2025-09-12T23:41:44.079699149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079732 containerd[1542]: time="2025-09-12T23:41:44.079714674Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079732 containerd[1542]: time="2025-09-12T23:41:44.079725581Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 23:41:44.079861 containerd[1542]: time="2025-09-12T23:41:44.079793620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 23:41:44.080093 containerd[1542]: time="2025-09-12T23:41:44.079960051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:41:44.080093 containerd[1542]: time="2025-09-12T23:41:44.080009150Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:41:44.080093 containerd[1542]: time="2025-09-12T23:41:44.080022036Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 23:41:44.080093 containerd[1542]: time="2025-09-12T23:41:44.080048894Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 23:41:44.080359 containerd[1542]: time="2025-09-12T23:41:44.080336966Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 23:41:44.080431 containerd[1542]: time="2025-09-12T23:41:44.080411720Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:41:44.081701 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:41:44.083742 containerd[1542]: time="2025-09-12T23:41:44.083707461Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 23:41:44.083785 containerd[1542]: time="2025-09-12T23:41:44.083762848Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 23:41:44.083785 containerd[1542]: time="2025-09-12T23:41:44.083782681Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 23:41:44.083817 containerd[1542]: time="2025-09-12T23:41:44.083794053Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 23:41:44.083850 containerd[1542]: time="2025-09-12T23:41:44.083805542Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 23:41:44.083867 containerd[1542]: time="2025-09-12T23:41:44.083853632Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 23:41:44.083884 containerd[1542]: time="2025-09-12T23:41:44.083865625Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 23:41:44.083884 containerd[1542]: time="2025-09-12T23:41:44.083877424Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Sep 12 23:41:44.083915 containerd[1542]: time="2025-09-12T23:41:44.083888952Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 23:41:44.083915 containerd[1542]: time="2025-09-12T23:41:44.083903545Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 23:41:44.083915 containerd[1542]: time="2025-09-12T23:41:44.083913055Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 23:41:44.083962 containerd[1542]: time="2025-09-12T23:41:44.083924543Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084023788Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084047969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084061981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084071412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084080456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084090120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084099785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084109410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084119502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084128972Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084138248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084436877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084464706Z" level=info msg="Start snapshots syncer" Sep 12 23:41:44.085249 containerd[1542]: time="2025-09-12T23:41:44.084492069Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 23:41:44.085486 containerd[1542]: time="2025-09-12T23:41:44.084887420Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 23:41:44.085486 containerd[1542]: time="2025-09-12T23:41:44.084934656Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085005567Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085184263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085207823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085218457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085279705Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085300664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085315801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085325504Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085533116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: 
time="2025-09-12T23:41:44.085559897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 23:41:44.085577 containerd[1542]: time="2025-09-12T23:41:44.085571618Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085612877Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085627432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085635776Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085644432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085651146Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085659452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 23:41:44.085742 containerd[1542]: time="2025-09-12T23:41:44.085675443Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 23:41:44.085848 containerd[1542]: time="2025-09-12T23:41:44.085747093Z" level=info msg="runtime interface created" Sep 12 23:41:44.085848 containerd[1542]: time="2025-09-12T23:41:44.085752216Z" level=info msg="created NRI interface" Sep 12 23:41:44.085848 containerd[1542]: time="2025-09-12T23:41:44.085760755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 23:41:44.085848 containerd[1542]: time="2025-09-12T23:41:44.085770963Z" level=info msg="Connect containerd service" Sep 12 23:41:44.085848 containerd[1542]: time="2025-09-12T23:41:44.085795881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:41:44.086512 containerd[1542]: time="2025-09-12T23:41:44.086473326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:41:44.099413 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:41:44.104969 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:41:44.117566 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:41:44.117781 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:41:44.120082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:41:44.149552 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:41:44.154496 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:41:44.156167 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:41:44.157364 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 23:41:44.164825 containerd[1542]: time="2025-09-12T23:41:44.164790085Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:41:44.164887 containerd[1542]: time="2025-09-12T23:41:44.164851720Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:41:44.164906 containerd[1542]: time="2025-09-12T23:41:44.164874581Z" level=info msg="Start subscribing containerd event" Sep 12 23:41:44.164924 containerd[1542]: time="2025-09-12T23:41:44.164912191Z" level=info msg="Start recovering state" Sep 12 23:41:44.164995 containerd[1542]: time="2025-09-12T23:41:44.164980191Z" level=info msg="Start event monitor" Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.164997813Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.165004915Z" level=info msg="Start streaming server" Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.165012290Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.165018384Z" level=info msg="runtime interface starting up..." Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.165023507Z" level=info msg="starting plugins..." Sep 12 23:41:44.165038 containerd[1542]: time="2025-09-12T23:41:44.165036005Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 23:41:44.165192 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:41:44.166089 containerd[1542]: time="2025-09-12T23:41:44.166064670Z" level=info msg="containerd successfully booted in 0.097974s" Sep 12 23:41:44.299907 tar[1524]: linux-arm64/README.md Sep 12 23:41:44.315120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:41:45.406377 systemd-networkd[1422]: eth0: Gained IPv6LL Sep 12 23:41:45.410281 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:41:45.411669 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:41:45.413885 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 23:41:45.415920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:41:45.417711 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:41:45.437813 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:41:45.437994 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:41:45.439362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:41:45.440718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:41:45.955296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:41:45.956485 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:41:45.957767 systemd[1]: Startup finished in 1.989s (kernel) + 5.672s (initrd) + 3.736s (userspace) = 11.398s. 
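containerd reports above that it is serving on /run/containerd/containerd.sock (plus its ttrpc twin) and that it booted in roughly 0.098s. A quick liveness probe is to simply connect to that UNIX socket; the sketch below only checks connectivity and does not speak the gRPC API.

    """Check that containerd's UNIX socket accepts connections.

    A bare connect() distinguishes "daemon up and listening" from "socket
    missing / nothing listening"; talking gRPC is out of scope here.
    """
    import socket
    import sys

    SOCKET_PATH = "/run/containerd/containerd.sock"  # address from the log above

    def socket_accepts_connections(path: str, timeout: float = 1.0) -> bool:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect(path)
                return True
            except OSError as exc:  # FileNotFoundError, ConnectionRefusedError, ...
                print(f"connect({path}) failed: {exc}", file=sys.stderr)
                return False

    if __name__ == "__main__":
        sys.exit(0 if socket_accepts_connections(SOCKET_PATH) else 1)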
Sep 12 23:41:45.959044 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:41:46.320681 kubelet[1631]: E0912 23:41:46.319761 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:41:46.323585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:41:46.323720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:41:46.324003 systemd[1]: kubelet.service: Consumed 765ms CPU time, 258.4M memory peak. Sep 12 23:41:49.043530 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:41:49.044583 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342). Sep 12 23:41:49.094427 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.095769 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.101331 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:41:49.102146 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:41:49.107047 systemd-logind[1515]: New session 1 of user core. Sep 12 23:41:49.126364 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:41:49.128638 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:41:49.147882 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:41:49.150047 systemd-logind[1515]: New session c1 of user core. Sep 12 23:41:49.249293 systemd[1648]: Queued start job for default target default.target. Sep 12 23:41:49.267185 systemd[1648]: Created slice app.slice - User Application Slice. Sep 12 23:41:49.267213 systemd[1648]: Reached target paths.target - Paths. Sep 12 23:41:49.267271 systemd[1648]: Reached target timers.target - Timers. Sep 12 23:41:49.268417 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:41:49.276971 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:41:49.277027 systemd[1648]: Reached target sockets.target - Sockets. Sep 12 23:41:49.277063 systemd[1648]: Reached target basic.target - Basic System. Sep 12 23:41:49.277089 systemd[1648]: Reached target default.target - Main User Target. Sep 12 23:41:49.277115 systemd[1648]: Startup finished in 122ms. Sep 12 23:41:49.277294 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:41:49.278625 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:41:49.336184 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:42346.service - OpenSSH per-connection server daemon (10.0.0.1:42346). Sep 12 23:41:49.393018 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 42346 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.393799 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.398449 systemd-logind[1515]: New session 2 of user core. 
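The kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; assuming this node is meant to be provisioned with kubeadm, that file is normally written during init/join, so the failure (and the restart seen later) is expected at this stage. A small pre-flight check along the lines of the error message, using only the path quoted in the log:

    """Pre-flight check for the kubelet config file the error above refers to.

    The path comes straight from the log; the kubeadm hint is an assumption
    about how this node is expected to be provisioned.
    """
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def check_kubelet_config(path: Path = KUBELET_CONFIG) -> bool:
        if path.is_file():
            print(f"{path} present ({path.stat().st_size} bytes); kubelet can load it")
            return True
        print(f"{path} missing: kubelet will keep exiting with status 1 until it exists")
        print("hint: on a kubeadm-managed node, kubeadm init/join generates this file")
        return False

    if __name__ == "__main__":
        check_kubelet_config()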
Sep 12 23:41:49.413401 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:41:49.463050 sshd[1661]: Connection closed by 10.0.0.1 port 42346 Sep 12 23:41:49.463515 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Sep 12 23:41:49.475168 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:42346.service: Deactivated successfully. Sep 12 23:41:49.476533 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:41:49.479290 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:41:49.481319 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:42360.service - OpenSSH per-connection server daemon (10.0.0.1:42360). Sep 12 23:41:49.482703 systemd-logind[1515]: Removed session 2. Sep 12 23:41:49.527310 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 42360 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.528505 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.532862 systemd-logind[1515]: New session 3 of user core. Sep 12 23:41:49.546407 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:41:49.593865 sshd[1669]: Connection closed by 10.0.0.1 port 42360 Sep 12 23:41:49.594163 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 12 23:41:49.608058 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:42360.service: Deactivated successfully. Sep 12 23:41:49.610385 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:41:49.610948 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:41:49.613306 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364). Sep 12 23:41:49.613726 systemd-logind[1515]: Removed session 3. Sep 12 23:41:49.666014 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.667091 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.671286 systemd-logind[1515]: New session 4 of user core. Sep 12 23:41:49.683386 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:41:49.733796 sshd[1678]: Connection closed by 10.0.0.1 port 42364 Sep 12 23:41:49.734124 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Sep 12 23:41:49.744965 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:42364.service: Deactivated successfully. Sep 12 23:41:49.747382 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:41:49.748511 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:41:49.750163 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:42372.service - OpenSSH per-connection server daemon (10.0.0.1:42372). Sep 12 23:41:49.750938 systemd-logind[1515]: Removed session 4. Sep 12 23:41:49.798629 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 42372 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.799755 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.803449 systemd-logind[1515]: New session 5 of user core. Sep 12 23:41:49.814375 systemd[1]: Started session-5.scope - Session 5 of User core. 
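Each accepted login above is tagged with the key's SHA256 fingerprint (RSA SHA256:U495...). OpenSSH derives that string as the unpadded base64 of the SHA-256 digest of the raw public-key blob; the sketch below reproduces it for a key in authorized_keys/.pub format (the default file path is an illustrative assumption).

    """Compute an OpenSSH-style SHA256 key fingerprint.

    Matches the "SHA256:..." strings in the sshd lines: base64 of the SHA-256
    digest of the base64-decoded key blob, with '=' padding stripped.
    """
    import base64
    import hashlib
    import sys

    def ssh_fingerprint(pubkey_line: str) -> str:
        # An authorized_keys / .pub line looks like: "ssh-rsa AAAAB3... comment"
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "/home/core/.ssh/authorized_keys"
        with open(path) as fh:
            for line in fh:
                if line.strip() and not line.startswith("#"):
                    print(ssh_fingerprint(line), line.split()[0])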
Sep 12 23:41:49.868857 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:41:49.869160 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:41:49.883718 sudo[1687]: pam_unix(sudo:session): session closed for user root Sep 12 23:41:49.885018 sshd[1686]: Connection closed by 10.0.0.1 port 42372 Sep 12 23:41:49.885540 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Sep 12 23:41:49.894158 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:42372.service: Deactivated successfully. Sep 12 23:41:49.896565 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:41:49.897305 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:41:49.899405 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:42376.service - OpenSSH per-connection server daemon (10.0.0.1:42376). Sep 12 23:41:49.900195 systemd-logind[1515]: Removed session 5. Sep 12 23:41:49.942481 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:49.943605 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:49.947656 systemd-logind[1515]: New session 6 of user core. Sep 12 23:41:49.957364 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:41:50.005736 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:41:50.005996 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:41:50.048964 sudo[1697]: pam_unix(sudo:session): session closed for user root Sep 12 23:41:50.053684 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 23:41:50.054170 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:41:50.062210 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:41:50.101532 augenrules[1719]: No rules Sep 12 23:41:50.102554 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:41:50.102748 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:41:50.104395 sudo[1696]: pam_unix(sudo:session): session closed for user root Sep 12 23:41:50.105858 sshd[1695]: Connection closed by 10.0.0.1 port 42376 Sep 12 23:41:50.105749 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Sep 12 23:41:50.120049 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:42376.service: Deactivated successfully. Sep 12 23:41:50.122019 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:41:50.122765 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:41:50.124870 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:47150.service - OpenSSH per-connection server daemon (10.0.0.1:47150). Sep 12 23:41:50.125526 systemd-logind[1515]: Removed session 6. Sep 12 23:41:50.175367 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 47150 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:41:50.176370 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:41:50.180285 systemd-logind[1515]: New session 7 of user core. Sep 12 23:41:50.196355 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 23:41:50.245325 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:41:50.245562 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:41:50.533356 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:41:50.549505 (dockerd)[1752]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:41:50.759152 dockerd[1752]: time="2025-09-12T23:41:50.759092561Z" level=info msg="Starting up" Sep 12 23:41:50.759980 dockerd[1752]: time="2025-09-12T23:41:50.759960566Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 23:41:50.782486 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2061906808-merged.mount: Deactivated successfully. Sep 12 23:41:50.913190 dockerd[1752]: time="2025-09-12T23:41:50.913087214Z" level=info msg="Loading containers: start." Sep 12 23:41:50.921252 kernel: Initializing XFRM netlink socket Sep 12 23:41:51.101641 systemd-networkd[1422]: docker0: Link UP Sep 12 23:41:51.104627 dockerd[1752]: time="2025-09-12T23:41:51.104590741Z" level=info msg="Loading containers: done." Sep 12 23:41:51.116398 dockerd[1752]: time="2025-09-12T23:41:51.116337734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:41:51.116514 dockerd[1752]: time="2025-09-12T23:41:51.116451157Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 12 23:41:51.116620 dockerd[1752]: time="2025-09-12T23:41:51.116599489Z" level=info msg="Initializing buildkit" Sep 12 23:41:51.137213 dockerd[1752]: time="2025-09-12T23:41:51.137131871Z" level=info msg="Completed buildkit initialization" Sep 12 23:41:51.141667 dockerd[1752]: time="2025-09-12T23:41:51.141634706Z" level=info msg="Daemon has completed initialization" Sep 12 23:41:51.141783 dockerd[1752]: time="2025-09-12T23:41:51.141748485Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:41:51.141868 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:41:51.780035 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck358740582-merged.mount: Deactivated successfully. Sep 12 23:41:51.864532 containerd[1542]: time="2025-09-12T23:41:51.864496469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 23:41:52.366250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883364382.mount: Deactivated successfully. 
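dockerd finishes initialization above and logs "API listen on /run/docker.sock". The Engine API can be queried over that UNIX socket with plain HTTP; the sketch below uses only the standard library and the /version endpoint, which is enough to confirm the daemon version reported in the log. It is an illustrative probe, not a substitute for the docker CLI or SDK.

    """Query the Docker Engine API over /run/docker.sock with the stdlib only."""
    import http.client
    import json
    import socket

    class UnixSocketHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that tunnels HTTP over a UNIX domain socket."""

        def __init__(self, socket_path: str, timeout: float = 2.0):
            super().__init__("localhost", timeout=timeout)
            self._socket_path = socket_path

        def connect(self) -> None:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.settimeout(self.timeout)
            sock.connect(self._socket_path)
            self.sock = sock

    if __name__ == "__main__":
        conn = UnixSocketHTTPConnection("/run/docker.sock")  # socket path from the log above
        conn.request("GET", "/version")
        version = json.load(conn.getresponse())
        print("docker", version.get("Version"), "api", version.get("ApiVersion"))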
Sep 12 23:41:53.428354 containerd[1542]: time="2025-09-12T23:41:53.428309824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:53.429011 containerd[1542]: time="2025-09-12T23:41:53.428982450Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 12 23:41:53.429639 containerd[1542]: time="2025-09-12T23:41:53.429612816Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:53.432817 containerd[1542]: time="2025-09-12T23:41:53.432426370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:53.433426 containerd[1542]: time="2025-09-12T23:41:53.433391483Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.568855616s" Sep 12 23:41:53.433523 containerd[1542]: time="2025-09-12T23:41:53.433497608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 23:41:53.435100 containerd[1542]: time="2025-09-12T23:41:53.435064544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 23:41:54.587180 containerd[1542]: time="2025-09-12T23:41:54.586302023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:54.587180 containerd[1542]: time="2025-09-12T23:41:54.586962706Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 12 23:41:54.587662 containerd[1542]: time="2025-09-12T23:41:54.587632953Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:54.590297 containerd[1542]: time="2025-09-12T23:41:54.590250321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:54.591189 containerd[1542]: time="2025-09-12T23:41:54.591146749Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.156048104s" Sep 12 23:41:54.591189 containerd[1542]: time="2025-09-12T23:41:54.591182825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 23:41:54.591879 
containerd[1542]: time="2025-09-12T23:41:54.591628757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 23:41:55.629066 containerd[1542]: time="2025-09-12T23:41:55.629013756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:55.630589 containerd[1542]: time="2025-09-12T23:41:55.630538232Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 12 23:41:55.631313 containerd[1542]: time="2025-09-12T23:41:55.631270397Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:55.634365 containerd[1542]: time="2025-09-12T23:41:55.634329321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:55.635384 containerd[1542]: time="2025-09-12T23:41:55.635355424Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.043691536s" Sep 12 23:41:55.635384 containerd[1542]: time="2025-09-12T23:41:55.635388239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 23:41:55.635786 containerd[1542]: time="2025-09-12T23:41:55.635763697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 23:41:56.571293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907775898.mount: Deactivated successfully. Sep 12 23:41:56.572353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:41:56.573566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:41:56.715089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:41:56.724630 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:41:56.762953 kubelet[2047]: E0912 23:41:56.762893 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:41:56.766580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:41:56.766867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:41:56.767222 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.4M memory peak. 
Sep 12 23:41:57.061149 containerd[1542]: time="2025-09-12T23:41:57.061035900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:57.062144 containerd[1542]: time="2025-09-12T23:41:57.062115743Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 12 23:41:57.063229 containerd[1542]: time="2025-09-12T23:41:57.063177482Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:57.064757 containerd[1542]: time="2025-09-12T23:41:57.064728091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:57.065598 containerd[1542]: time="2025-09-12T23:41:57.065146293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.429352349s" Sep 12 23:41:57.065598 containerd[1542]: time="2025-09-12T23:41:57.065178881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 23:41:57.065717 containerd[1542]: time="2025-09-12T23:41:57.065695963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 23:41:57.638196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141467289.mount: Deactivated successfully. 
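Both containerd and dockerd emit logfmt-style lines here: key=value pairs where quoted values may contain escaped quotes. When post-processing a boot log like this one, a small parser makes the level/msg/id fields easy to pull out. The regex below is a convenience sketch that covers the quoting used in these lines, not the daemons' own format definition; the sample line is the coredns PullImage entry from just above.

    """Parse logfmt-style key=value pairs as emitted by containerd/dockerd."""
    import re

    # key=  then either a "quoted value (with \" escapes)"  or  a bare token
    _PAIR = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

    def parse_logfmt(line: str) -> dict[str, str]:
        out = {}
        for key, quoted, bare in _PAIR.findall(line):
            value = quoted if quoted else bare
            out[key] = value.replace('\\"', '"')  # undo escaped inner quotes
        return out

    if __name__ == "__main__":
        sample = ('time="2025-09-12T23:41:57.065695963Z" level=info '
                  'msg="PullImage \\"registry.k8s.io/coredns/coredns:v1.12.0\\""')
        fields = parse_logfmt(sample)
        print(fields["level"], "|", fields["msg"])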
Sep 12 23:41:58.422567 containerd[1542]: time="2025-09-12T23:41:58.422518511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:58.423091 containerd[1542]: time="2025-09-12T23:41:58.423061495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 12 23:41:58.424085 containerd[1542]: time="2025-09-12T23:41:58.424062494Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:58.431134 containerd[1542]: time="2025-09-12T23:41:58.431097238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:41:58.432958 containerd[1542]: time="2025-09-12T23:41:58.432920696Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.367197114s" Sep 12 23:41:58.433006 containerd[1542]: time="2025-09-12T23:41:58.432956094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 23:41:58.433415 containerd[1542]: time="2025-09-12T23:41:58.433387033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:41:58.862787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499467798.mount: Deactivated successfully. 
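Each completed pull above is reported with both a repo tag and a repo digest for the same image (for coredns: registry.k8s.io/coredns/coredns:v1.12.0 and ...@sha256:40384aa1...). Splitting such a reference into registry, repository, tag, and digest is a common post-processing step; the sketch below covers only the two forms appearing in this log and deliberately does not implement the full OCI/distribution reference grammar.

    """Split image references of the forms seen in this log.

    Handles "registry/repo:tag" and "registry/repo@sha256:digest"; ports,
    default registries, and tag-plus-digest forms are out of scope.
    """
    def parse_reference(ref: str) -> dict[str, str]:
        digest = tag = ""
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        else:
            # The tag is whatever follows the last ':' unless that part contains '/'.
            head, _, maybe_tag = ref.rpartition(":")
            if head and "/" not in maybe_tag:
                ref, tag = head, maybe_tag
        registry, _, repository = ref.partition("/")
        return {"registry": registry, "repository": repository, "tag": tag, "digest": digest}

    if __name__ == "__main__":
        for ref in (
            "registry.k8s.io/coredns/coredns:v1.12.0",
            "registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97",
        ):
            print(parse_reference(ref))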
Sep 12 23:41:58.868285 containerd[1542]: time="2025-09-12T23:41:58.868226125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:41:58.869229 containerd[1542]: time="2025-09-12T23:41:58.869201402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 23:41:58.870112 containerd[1542]: time="2025-09-12T23:41:58.870090197Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:41:58.871967 containerd[1542]: time="2025-09-12T23:41:58.871919030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:41:58.872714 containerd[1542]: time="2025-09-12T23:41:58.872691479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 439.274146ms" Sep 12 23:41:58.872767 containerd[1542]: time="2025-09-12T23:41:58.872720148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:41:58.873118 containerd[1542]: time="2025-09-12T23:41:58.873093750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 23:41:59.277510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791339872.mount: Deactivated successfully. 
Sep 12 23:42:00.806814 containerd[1542]: time="2025-09-12T23:42:00.806763121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:00.807637 containerd[1542]: time="2025-09-12T23:42:00.807609824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 12 23:42:00.808342 containerd[1542]: time="2025-09-12T23:42:00.808309684Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:00.811498 containerd[1542]: time="2025-09-12T23:42:00.810911896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:00.812066 containerd[1542]: time="2025-09-12T23:42:00.812045271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.938922884s" Sep 12 23:42:00.812107 containerd[1542]: time="2025-09-12T23:42:00.812072257Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 23:42:05.759834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:42:05.759976 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.4M memory peak. Sep 12 23:42:05.761819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:42:05.782183 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)... Sep 12 23:42:05.782201 systemd[1]: Reloading... Sep 12 23:42:05.853310 zram_generator::config[2243]: No configuration found. Sep 12 23:42:05.961092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:42:06.045069 systemd[1]: Reloading finished in 262 ms. Sep 12 23:42:06.121677 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:42:06.121757 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:42:06.121990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:42:06.122046 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. Sep 12 23:42:06.124461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:42:06.230037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:42:06.241511 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:42:06.273089 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:42:06.273089 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 12 23:42:06.273089 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:42:06.273432 kubelet[2288]: I0912 23:42:06.273122 2288 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:42:07.581150 kubelet[2288]: I0912 23:42:07.581100 2288 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 23:42:07.581150 kubelet[2288]: I0912 23:42:07.581134 2288 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:42:07.581496 kubelet[2288]: I0912 23:42:07.581358 2288 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 23:42:07.604116 kubelet[2288]: E0912 23:42:07.604069 2288 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 23:42:07.605023 kubelet[2288]: I0912 23:42:07.604993 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:42:07.618696 kubelet[2288]: I0912 23:42:07.618660 2288 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:42:07.623599 kubelet[2288]: I0912 23:42:07.623517 2288 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:42:07.623917 kubelet[2288]: I0912 23:42:07.623876 2288 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:42:07.624048 kubelet[2288]: I0912 23:42:07.623907 2288 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:42:07.624138 kubelet[2288]: I0912 23:42:07.624117 2288 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:42:07.624138 kubelet[2288]: I0912 23:42:07.624126 2288 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 23:42:07.624355 kubelet[2288]: I0912 23:42:07.624331 2288 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:42:07.626940 kubelet[2288]: I0912 23:42:07.626905 2288 kubelet.go:480] "Attempting to sync node with API server" Sep 12 23:42:07.627638 kubelet[2288]: I0912 23:42:07.627601 2288 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:42:07.627669 kubelet[2288]: I0912 23:42:07.627650 2288 kubelet.go:386] "Adding apiserver pod source" Sep 12 23:42:07.629260 kubelet[2288]: I0912 23:42:07.629141 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:42:07.630994 kubelet[2288]: I0912 23:42:07.630969 2288 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 12 23:42:07.631681 kubelet[2288]: I0912 23:42:07.631642 2288 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 23:42:07.631785 kubelet[2288]: W0912 23:42:07.631762 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 23:42:07.635017 kubelet[2288]: E0912 23:42:07.634856 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 23:42:07.635585 kubelet[2288]: E0912 23:42:07.635330 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 23:42:07.635585 kubelet[2288]: I0912 23:42:07.635516 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:42:07.635707 kubelet[2288]: I0912 23:42:07.635609 2288 server.go:1289] "Started kubelet" Sep 12 23:42:07.636738 kubelet[2288]: I0912 23:42:07.636716 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:42:07.639527 kubelet[2288]: I0912 23:42:07.639453 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:42:07.639775 kubelet[2288]: I0912 23:42:07.639746 2288 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:42:07.639818 kubelet[2288]: I0912 23:42:07.639802 2288 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:42:07.640649 kubelet[2288]: I0912 23:42:07.640630 2288 server.go:317] "Adding debug handlers to kubelet server" Sep 12 23:42:07.642994 kubelet[2288]: E0912 23:42:07.640099 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ad81d02194c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:42:07.635584194 +0000 UTC m=+1.390506742,LastTimestamp:2025-09-12 23:42:07.635584194 +0000 UTC m=+1.390506742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 23:42:07.643411 kubelet[2288]: I0912 23:42:07.643394 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:42:07.643959 kubelet[2288]: E0912 23:42:07.643926 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:42:07.644005 kubelet[2288]: I0912 23:42:07.643989 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:42:07.644056 kubelet[2288]: I0912 23:42:07.644046 2288 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:42:07.644757 kubelet[2288]: E0912 23:42:07.644724 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" 
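The reflector and event errors above all reduce to the same symptom: TCP connections to the API server address 10.0.0.74:6443 are refused, which is expected until kube-apiserver comes up as a static pod. A direct connectivity probe separates "refused" from "unreachable/timeout"; only the address is taken from the log.

    """Probe the API server endpoint the kubelet is failing to reach."""
    import socket

    APISERVER = ("10.0.0.74", 6443)  # endpoint from the "connection refused" errors above

    def probe(addr: tuple[str, int], timeout: float = 2.0) -> str:
        try:
            with socket.create_connection(addr, timeout=timeout):
                return "open"
        except ConnectionRefusedError:
            return "refused (host reachable, nothing listening yet)"
        except socket.timeout:
            return "timeout (filtered or host unreachable)"
        except OSError as exc:
            return f"error: {exc}"

    if __name__ == "__main__":
        print(f"{APISERVER[0]}:{APISERVER[1]} -> {probe(APISERVER)}")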
Sep 12 23:42:07.645267 kubelet[2288]: E0912 23:42:07.644969 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Sep 12 23:42:07.645267 kubelet[2288]: I0912 23:42:07.645081 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:42:07.645880 kubelet[2288]: I0912 23:42:07.645861 2288 factory.go:223] Registration of the systemd container factory successfully Sep 12 23:42:07.645951 kubelet[2288]: I0912 23:42:07.645937 2288 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:42:07.648551 kubelet[2288]: E0912 23:42:07.648531 2288 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:42:07.648799 kubelet[2288]: I0912 23:42:07.648784 2288 factory.go:223] Registration of the containerd container factory successfully Sep 12 23:42:07.659418 kubelet[2288]: I0912 23:42:07.659394 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:42:07.659418 kubelet[2288]: I0912 23:42:07.659411 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:42:07.659418 kubelet[2288]: I0912 23:42:07.659427 2288 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:42:07.744185 kubelet[2288]: E0912 23:42:07.744125 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:42:07.751549 kubelet[2288]: I0912 23:42:07.751318 2288 policy_none.go:49] "None policy: Start" Sep 12 23:42:07.751549 kubelet[2288]: I0912 23:42:07.751352 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:42:07.751549 kubelet[2288]: I0912 23:42:07.751364 2288 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:42:07.756664 kubelet[2288]: I0912 23:42:07.756567 2288 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 23:42:07.758128 kubelet[2288]: I0912 23:42:07.758107 2288 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 23:42:07.758128 kubelet[2288]: I0912 23:42:07.758129 2288 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 23:42:07.758208 kubelet[2288]: I0912 23:42:07.758155 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 23:42:07.758208 kubelet[2288]: I0912 23:42:07.758164 2288 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 23:42:07.758208 kubelet[2288]: E0912 23:42:07.758199 2288 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:42:07.758735 kubelet[2288]: E0912 23:42:07.758669 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 23:42:07.761325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:42:07.781821 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:42:07.786019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:42:07.808008 kubelet[2288]: E0912 23:42:07.807982 2288 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 23:42:07.808360 kubelet[2288]: I0912 23:42:07.808298 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:42:07.808360 kubelet[2288]: I0912 23:42:07.808314 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:42:07.808990 kubelet[2288]: I0912 23:42:07.808967 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:42:07.810077 kubelet[2288]: E0912 23:42:07.810055 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 23:42:07.810171 kubelet[2288]: E0912 23:42:07.810159 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:42:07.845835 kubelet[2288]: E0912 23:42:07.845734 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Sep 12 23:42:07.869801 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 23:42:07.899805 kubelet[2288]: E0912 23:42:07.899765 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:07.903692 systemd[1]: Created slice kubepods-burstable-pod2a06f881f6cae1ef0fdae47e51b3b990.slice - libcontainer container kubepods-burstable-pod2a06f881f6cae1ef0fdae47e51b3b990.slice. 
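The lease controller's "will retry" interval grows between attempts: 200ms above, then 400ms a few entries later, doubling while the API server stays unreachable. Below is a minimal capped exponential backoff in that spirit; only the 200ms start and the doubling are taken from the log, and the 7s cap is an illustrative assumption.

    """Capped exponential backoff in the spirit of the lease retry intervals above.

    The 200ms start and the doubling (200ms -> 400ms -> ...) are visible in the
    log; the 7s cap here is an assumption chosen for illustration.
    """
    from itertools import count

    def backoff_intervals(start: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        """Yield retry intervals in seconds: start, start*factor, ... capped at `cap`."""
        for attempt in count():
            yield min(start * factor**attempt, cap)

    if __name__ == "__main__":
        gen = backoff_intervals()
        print([f"{next(gen) * 1000:.0f}ms" for _ in range(6)])
        # -> ['200ms', '400ms', '800ms', '1600ms', '3200ms', '6400ms']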
Sep 12 23:42:07.905892 kubelet[2288]: E0912 23:42:07.905868 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:07.909941 kubelet[2288]: I0912 23:42:07.909916 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:42:07.910379 kubelet[2288]: E0912 23:42:07.910354 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Sep 12 23:42:07.922971 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 23:42:07.924844 kubelet[2288]: E0912 23:42:07.924675 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:08.045340 kubelet[2288]: I0912 23:42:08.045299 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:08.045340 kubelet[2288]: I0912 23:42:08.045337 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:08.045454 kubelet[2288]: I0912 23:42:08.045361 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:08.045454 kubelet[2288]: I0912 23:42:08.045376 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:08.045454 kubelet[2288]: I0912 23:42:08.045393 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:08.045537 kubelet[2288]: I0912 23:42:08.045447 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:08.045537 kubelet[2288]: I0912 23:42:08.045482 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:08.045537 kubelet[2288]: I0912 23:42:08.045500 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:08.045537 kubelet[2288]: I0912 23:42:08.045519 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:08.111698 kubelet[2288]: I0912 23:42:08.111583 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:42:08.111940 kubelet[2288]: E0912 23:42:08.111905 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Sep 12 23:42:08.201576 containerd[1542]: time="2025-09-12T23:42:08.201464669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:08.207078 containerd[1542]: time="2025-09-12T23:42:08.207029363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a06f881f6cae1ef0fdae47e51b3b990,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:08.222798 containerd[1542]: time="2025-09-12T23:42:08.222736793Z" level=info msg="connecting to shim 625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca" address="unix:///run/containerd/s/caa8ad46ac989fbed8093b0673775940f81ab7dad2fa95861ec63863caf32e28" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:08.227263 containerd[1542]: time="2025-09-12T23:42:08.226445483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:08.234708 containerd[1542]: time="2025-09-12T23:42:08.234671529Z" level=info msg="connecting to shim 40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d" address="unix:///run/containerd/s/0f3a59ce61726ff9220abd8d9d48a70b51f707da95f5b31816fa86e8ebced001" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:08.246523 kubelet[2288]: E0912 23:42:08.246469 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Sep 12 23:42:08.261609 containerd[1542]: time="2025-09-12T23:42:08.260428208Z" level=info msg="connecting to shim dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57" address="unix:///run/containerd/s/c812b74cdbb37614736de909f881b2baeca5b057c4ccbef67507d547dce7ad76" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:08.261422 systemd[1]: Started 
cri-containerd-625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca.scope - libcontainer container 625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca. Sep 12 23:42:08.264846 systemd[1]: Started cri-containerd-40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d.scope - libcontainer container 40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d. Sep 12 23:42:08.291415 systemd[1]: Started cri-containerd-dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57.scope - libcontainer container dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57. Sep 12 23:42:08.304946 containerd[1542]: time="2025-09-12T23:42:08.304842121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a06f881f6cae1ef0fdae47e51b3b990,Namespace:kube-system,Attempt:0,} returns sandbox id \"40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d\"" Sep 12 23:42:08.306090 containerd[1542]: time="2025-09-12T23:42:08.306030929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca\"" Sep 12 23:42:08.309726 containerd[1542]: time="2025-09-12T23:42:08.309646531Z" level=info msg="CreateContainer within sandbox \"40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:42:08.312039 containerd[1542]: time="2025-09-12T23:42:08.311904210Z" level=info msg="CreateContainer within sandbox \"625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:42:08.317127 containerd[1542]: time="2025-09-12T23:42:08.317088283Z" level=info msg="Container 85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:08.321563 containerd[1542]: time="2025-09-12T23:42:08.321526454Z" level=info msg="Container ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:08.327271 containerd[1542]: time="2025-09-12T23:42:08.327144004Z" level=info msg="CreateContainer within sandbox \"40a3125c769bcfb8a87d3e35d447ccba89eb0bb5b33f78c0ffd2fe62ebe5490d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd\"" Sep 12 23:42:08.328715 containerd[1542]: time="2025-09-12T23:42:08.328677875Z" level=info msg="StartContainer for \"85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd\"" Sep 12 23:42:08.329841 containerd[1542]: time="2025-09-12T23:42:08.329804477Z" level=info msg="connecting to shim 85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd" address="unix:///run/containerd/s/0f3a59ce61726ff9220abd8d9d48a70b51f707da95f5b31816fa86e8ebced001" protocol=ttrpc version=3 Sep 12 23:42:08.331327 containerd[1542]: time="2025-09-12T23:42:08.331285173Z" level=info msg="CreateContainer within sandbox \"625523f75145e2b5cb3aba776c46ab5b613273f13d9f0e1f71c6fa8abbed31ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191\"" Sep 12 23:42:08.332183 containerd[1542]: time="2025-09-12T23:42:08.332087766Z" level=info msg="StartContainer for 
\"ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191\"" Sep 12 23:42:08.333295 containerd[1542]: time="2025-09-12T23:42:08.333267464Z" level=info msg="connecting to shim ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191" address="unix:///run/containerd/s/caa8ad46ac989fbed8093b0673775940f81ab7dad2fa95861ec63863caf32e28" protocol=ttrpc version=3 Sep 12 23:42:08.336941 containerd[1542]: time="2025-09-12T23:42:08.336900525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57\"" Sep 12 23:42:08.342243 containerd[1542]: time="2025-09-12T23:42:08.342207330Z" level=info msg="CreateContainer within sandbox \"dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:42:08.349478 containerd[1542]: time="2025-09-12T23:42:08.349437017Z" level=info msg="Container b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:08.351392 systemd[1]: Started cri-containerd-85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd.scope - libcontainer container 85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd. Sep 12 23:42:08.355087 systemd[1]: Started cri-containerd-ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191.scope - libcontainer container ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191. Sep 12 23:42:08.357639 containerd[1542]: time="2025-09-12T23:42:08.357605412Z" level=info msg="CreateContainer within sandbox \"dde6bf78417dddb9abd1d1f5a5617ea91e55ae853d8efe21a333462f60695e57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685\"" Sep 12 23:42:08.358083 containerd[1542]: time="2025-09-12T23:42:08.358060943Z" level=info msg="StartContainer for \"b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685\"" Sep 12 23:42:08.359555 containerd[1542]: time="2025-09-12T23:42:08.359522302Z" level=info msg="connecting to shim b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685" address="unix:///run/containerd/s/c812b74cdbb37614736de909f881b2baeca5b057c4ccbef67507d547dce7ad76" protocol=ttrpc version=3 Sep 12 23:42:08.385647 systemd[1]: Started cri-containerd-b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685.scope - libcontainer container b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685. 
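The registration failures earlier in this sequence ("connect: connection refused" against https://10.0.0.74:6443) are expected at this point: the kubelet is dialing the very kube-apiserver it is still bringing up as a static pod, and the dials only start succeeding once the apiserver container started here is running. The sketch below is a minimal reachability probe in the same spirit, not what the kubelet itself does; it assumes the apiserver's unauthenticated /healthz endpoint is exposed (kubeadm-style defaults) and skips TLS verification purely for brevity.

    import ssl
    import time
    import urllib.error
    import urllib.request

    APISERVER_HEALTHZ = "https://10.0.0.74:6443/healthz"   # address taken from the log above

    def wait_for_apiserver(url: str = APISERVER_HEALTHZ, timeout: float = 60.0) -> bool:
        """Poll the apiserver health endpoint until it answers or the timeout expires."""
        insecure = ssl.create_default_context()
        insecure.check_hostname = False          # demo only: do not verify the serving cert
        insecure.verify_mode = ssl.CERT_NONE
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with urllib.request.urlopen(url, context=insecure, timeout=2) as resp:
                    if resp.status == 200:       # /healthz answers "ok" once the apiserver is up
                        return True
            except (urllib.error.URLError, OSError) as err:
                print(f"apiserver not reachable yet: {err}")   # e.g. connection refused
            time.sleep(1)
        return False

    if __name__ == "__main__":
        print("apiserver reachable:", wait_for_apiserver())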
Sep 12 23:42:08.403840 containerd[1542]: time="2025-09-12T23:42:08.403695985Z" level=info msg="StartContainer for \"85817d1ca87dabba7e56aa30f4f9170851c2953b4a690bf8c5b4471ed8ec44dd\" returns successfully" Sep 12 23:42:08.411257 containerd[1542]: time="2025-09-12T23:42:08.410993311Z" level=info msg="StartContainer for \"ca808682f352e3eb124c972c74ab56adb2d0c28ac6533f08f11d6d635d2d3191\" returns successfully" Sep 12 23:42:08.442530 containerd[1542]: time="2025-09-12T23:42:08.442492788Z" level=info msg="StartContainer for \"b7e022fd195cd23a8141b50f2f47b0422a5d76f11eed05f6e64ce00da685d685\" returns successfully" Sep 12 23:42:08.513382 kubelet[2288]: I0912 23:42:08.513348 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:42:08.771141 kubelet[2288]: E0912 23:42:08.770866 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:08.772776 kubelet[2288]: E0912 23:42:08.772759 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:08.776574 kubelet[2288]: E0912 23:42:08.776547 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:09.779701 kubelet[2288]: E0912 23:42:09.779663 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:09.782880 kubelet[2288]: E0912 23:42:09.782850 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:42:10.196593 kubelet[2288]: E0912 23:42:10.196553 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 23:42:10.277166 kubelet[2288]: I0912 23:42:10.277112 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:42:10.345054 kubelet[2288]: I0912 23:42:10.345015 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:10.351378 kubelet[2288]: E0912 23:42:10.349962 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:10.351378 kubelet[2288]: I0912 23:42:10.349986 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:10.351822 kubelet[2288]: E0912 23:42:10.351791 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:10.351978 kubelet[2288]: I0912 23:42:10.351898 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:10.353579 kubelet[2288]: E0912 23:42:10.353543 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:10.630670 kubelet[2288]: I0912 23:42:10.630631 
2288 apiserver.go:52] "Watching apiserver" Sep 12 23:42:10.644522 kubelet[2288]: I0912 23:42:10.644483 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:42:11.462909 kubelet[2288]: I0912 23:42:11.462878 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:12.178293 systemd[1]: Reload requested from client PID 2570 ('systemctl') (unit session-7.scope)... Sep 12 23:42:12.178307 systemd[1]: Reloading... Sep 12 23:42:12.243293 zram_generator::config[2616]: No configuration found. Sep 12 23:42:12.393178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:42:12.488611 systemd[1]: Reloading finished in 310 ms. Sep 12 23:42:12.517683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:42:12.526286 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:42:12.526519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:42:12.526571 systemd[1]: kubelet.service: Consumed 1.750s CPU time, 130.6M memory peak. Sep 12 23:42:12.528084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:42:12.655881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:42:12.659040 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:42:12.698850 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:42:12.698850 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:42:12.698850 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
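The three deprecation warnings above all point the same way: --container-runtime-endpoint and --volume-plugin-dir belong in the KubeletConfiguration file passed via --config, and --pod-infra-container-image is going away entirely because the sandbox image is now reported by the CRI runtime. A small sketch that writes such a config file follows (the kubelet accepts JSON as well as YAML for it); the field names are from kubelet.config.k8s.io/v1beta1, while the concrete endpoint and directory values are illustrative assumptions, not values read from this host.

    import json

    # Illustrative KubeletConfiguration replacing the deprecated flags warned about above.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # assumed socket path
        "volumePluginDir": "/var/lib/kubelet/volumeplugins",                   # assumed plugin dir
    }

    with open("kubelet-config.json", "w") as f:
        json.dump(kubelet_config, f, indent=2)
    print("wrote kubelet-config.json; point the kubelet at it with --config")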
Sep 12 23:42:12.699162 kubelet[2655]: I0912 23:42:12.698899 2655 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:42:12.703843 kubelet[2655]: I0912 23:42:12.703805 2655 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 23:42:12.703843 kubelet[2655]: I0912 23:42:12.703830 2655 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:42:12.704025 kubelet[2655]: I0912 23:42:12.703996 2655 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 23:42:12.705121 kubelet[2655]: I0912 23:42:12.705105 2655 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 23:42:12.707302 kubelet[2655]: I0912 23:42:12.707176 2655 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:42:12.711483 kubelet[2655]: I0912 23:42:12.711434 2655 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:42:12.713973 kubelet[2655]: I0912 23:42:12.713916 2655 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 23:42:12.714148 kubelet[2655]: I0912 23:42:12.714115 2655 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:42:12.714286 kubelet[2655]: I0912 23:42:12.714138 2655 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:42:12.714397 kubelet[2655]: I0912 23:42:12.714296 2655 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:42:12.714397 kubelet[2655]: I0912 23:42:12.714304 2655 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 23:42:12.714397 kubelet[2655]: I0912 23:42:12.714340 2655 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:42:12.714491 kubelet[2655]: I0912 
23:42:12.714476 2655 kubelet.go:480] "Attempting to sync node with API server" Sep 12 23:42:12.714491 kubelet[2655]: I0912 23:42:12.714491 2655 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:42:12.714536 kubelet[2655]: I0912 23:42:12.714510 2655 kubelet.go:386] "Adding apiserver pod source" Sep 12 23:42:12.714536 kubelet[2655]: I0912 23:42:12.714523 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:42:12.715783 kubelet[2655]: I0912 23:42:12.715708 2655 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 12 23:42:12.716278 kubelet[2655]: I0912 23:42:12.716257 2655 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 23:42:12.718691 kubelet[2655]: I0912 23:42:12.718673 2655 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:42:12.718760 kubelet[2655]: I0912 23:42:12.718708 2655 server.go:1289] "Started kubelet" Sep 12 23:42:12.720060 kubelet[2655]: I0912 23:42:12.720032 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:42:12.726842 kubelet[2655]: I0912 23:42:12.726800 2655 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:42:12.727728 kubelet[2655]: I0912 23:42:12.727687 2655 server.go:317] "Adding debug handlers to kubelet server" Sep 12 23:42:12.730331 kubelet[2655]: I0912 23:42:12.730272 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:42:12.732474 kubelet[2655]: I0912 23:42:12.732383 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:42:12.732810 kubelet[2655]: I0912 23:42:12.732787 2655 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:42:12.732862 kubelet[2655]: E0912 23:42:12.730620 2655 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:42:12.732888 kubelet[2655]: I0912 23:42:12.730421 2655 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:42:12.733271 kubelet[2655]: I0912 23:42:12.733209 2655 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:42:12.739299 kubelet[2655]: I0912 23:42:12.738528 2655 factory.go:223] Registration of the systemd container factory successfully Sep 12 23:42:12.740063 kubelet[2655]: I0912 23:42:12.740021 2655 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:42:12.740848 kubelet[2655]: I0912 23:42:12.730438 2655 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:42:12.742269 kubelet[2655]: I0912 23:42:12.742055 2655 factory.go:223] Registration of the containerd container factory successfully Sep 12 23:42:12.742329 kubelet[2655]: E0912 23:42:12.742272 2655 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:42:12.745072 kubelet[2655]: I0912 23:42:12.744950 2655 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 12 23:42:12.745969 kubelet[2655]: I0912 23:42:12.745950 2655 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 23:42:12.746068 kubelet[2655]: I0912 23:42:12.746047 2655 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 23:42:12.746151 kubelet[2655]: I0912 23:42:12.746132 2655 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 23:42:12.746201 kubelet[2655]: I0912 23:42:12.746193 2655 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 23:42:12.746297 kubelet[2655]: E0912 23:42:12.746281 2655 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:42:12.772156 kubelet[2655]: I0912 23:42:12.772134 2655 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:42:12.772156 kubelet[2655]: I0912 23:42:12.772151 2655 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:42:12.772281 kubelet[2655]: I0912 23:42:12.772171 2655 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:42:12.772335 kubelet[2655]: I0912 23:42:12.772320 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:42:12.772358 kubelet[2655]: I0912 23:42:12.772335 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:42:12.772358 kubelet[2655]: I0912 23:42:12.772351 2655 policy_none.go:49] "None policy: Start" Sep 12 23:42:12.772394 kubelet[2655]: I0912 23:42:12.772359 2655 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:42:12.772394 kubelet[2655]: I0912 23:42:12.772369 2655 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:42:12.772454 kubelet[2655]: I0912 23:42:12.772444 2655 state_mem.go:75] "Updated machine memory state" Sep 12 23:42:12.775775 kubelet[2655]: E0912 23:42:12.775747 2655 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 23:42:12.775920 kubelet[2655]: I0912 23:42:12.775899 2655 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:42:12.775947 kubelet[2655]: I0912 23:42:12.775916 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:42:12.776680 kubelet[2655]: I0912 23:42:12.776622 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:42:12.777005 kubelet[2655]: E0912 23:42:12.776985 2655 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:42:12.847477 kubelet[2655]: I0912 23:42:12.847407 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:12.847477 kubelet[2655]: I0912 23:42:12.847436 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:12.848445 kubelet[2655]: I0912 23:42:12.848200 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:12.853596 kubelet[2655]: E0912 23:42:12.853563 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:12.880489 kubelet[2655]: I0912 23:42:12.880274 2655 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 23:42:12.887963 kubelet[2655]: I0912 23:42:12.887934 2655 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 23:42:12.888114 kubelet[2655]: I0912 23:42:12.888006 2655 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 23:42:13.034828 kubelet[2655]: I0912 23:42:13.034730 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:13.034828 kubelet[2655]: I0912 23:42:13.034764 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:13.034828 kubelet[2655]: I0912 23:42:13.034786 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a06f881f6cae1ef0fdae47e51b3b990-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a06f881f6cae1ef0fdae47e51b3b990\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:13.034828 kubelet[2655]: I0912 23:42:13.034807 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:13.034828 kubelet[2655]: I0912 23:42:13.034825 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:13.035038 kubelet[2655]: I0912 23:42:13.034841 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:13.035038 kubelet[2655]: I0912 23:42:13.034857 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:13.035038 kubelet[2655]: I0912 23:42:13.034875 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:13.035038 kubelet[2655]: I0912 23:42:13.034890 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:42:13.182104 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 23:42:13.182407 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 23:42:13.621687 sudo[2693]: pam_unix(sudo:session): session closed for user root Sep 12 23:42:13.715295 kubelet[2655]: I0912 23:42:13.714892 2655 apiserver.go:52] "Watching apiserver" Sep 12 23:42:13.741469 kubelet[2655]: I0912 23:42:13.741421 2655 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:42:13.761021 kubelet[2655]: I0912 23:42:13.760834 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:13.761021 kubelet[2655]: I0912 23:42:13.760899 2655 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:13.765471 kubelet[2655]: E0912 23:42:13.765387 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 23:42:13.765713 kubelet[2655]: E0912 23:42:13.765644 2655 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 23:42:13.782769 kubelet[2655]: I0912 23:42:13.782544 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.782530602 podStartE2EDuration="1.782530602s" podCreationTimestamp="2025-09-12 23:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:42:13.782465922 +0000 UTC m=+1.120271534" watchObservedRunningTime="2025-09-12 23:42:13.782530602 +0000 UTC m=+1.120336174" Sep 12 23:42:13.795795 kubelet[2655]: I0912 23:42:13.795749 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7957354840000002 podStartE2EDuration="1.795735484s" podCreationTimestamp="2025-09-12 23:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-12 23:42:13.789649364 +0000 UTC m=+1.127454936" watchObservedRunningTime="2025-09-12 23:42:13.795735484 +0000 UTC m=+1.133541056" Sep 12 23:42:13.795898 kubelet[2655]: I0912 23:42:13.795863 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.795858728 podStartE2EDuration="2.795858728s" podCreationTimestamp="2025-09-12 23:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:42:13.795829946 +0000 UTC m=+1.133635518" watchObservedRunningTime="2025-09-12 23:42:13.795858728 +0000 UTC m=+1.133664300" Sep 12 23:42:14.965121 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 12 23:42:14.966347 sshd[1730]: Connection closed by 10.0.0.1 port 47150 Sep 12 23:42:14.966849 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:14.970184 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:47150.service: Deactivated successfully. Sep 12 23:42:14.972115 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:42:14.972353 systemd[1]: session-7.scope: Consumed 6.573s CPU time, 266M memory peak. Sep 12 23:42:14.973209 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:42:14.974654 systemd-logind[1515]: Removed session 7. Sep 12 23:42:17.597877 kubelet[2655]: I0912 23:42:17.597545 2655 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:42:17.599561 containerd[1542]: time="2025-09-12T23:42:17.599477738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:42:17.601427 kubelet[2655]: I0912 23:42:17.601400 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:42:18.207218 systemd[1]: Created slice kubepods-besteffort-pod53e32a8c_bf71_4ba7_a67f_e85ccacd601c.slice - libcontainer container kubepods-besteffort-pod53e32a8c_bf71_4ba7_a67f_e85ccacd601c.slice. Sep 12 23:42:18.223039 systemd[1]: Created slice kubepods-burstable-pod052a3dd4_2cc2_4d9e_a3b6_1f630d554c88.slice - libcontainer container kubepods-burstable-pod052a3dd4_2cc2_4d9e_a3b6_1f630d554c88.slice. 
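The pod_startup_latency_tracker records above report podStartSLOduration values that line up exactly with watchObservedRunningTime minus podCreationTimestamp (the firstStartedPulling/lastFinishedPulling stamps are zero-valued here). Whether that subtraction is precisely how the tracker defines the metric is an assumption; the sketch below simply reproduces the logged number for kube-controller-manager-localhost from the two timestamps.

    from datetime import datetime, timezone

    def to_ns(ts: str) -> int:
        """Convert a '2025-09-12 23:42:13.782530602 +0000 UTC' stamp to integer nanoseconds."""
        stamp = ts.split(" +0000 UTC")[0]
        main, _, frac = stamp.partition(".")
        frac = (frac + "000000000")[:9]                       # pad the fraction to nanoseconds
        base = datetime.strptime(main, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        return int(base.timestamp()) * 10**9 + int(frac)

    created  = to_ns("2025-09-12 23:42:12 +0000 UTC")
    observed = to_ns("2025-09-12 23:42:13.782530602 +0000 UTC")
    print((observed - created) / 1e9)                         # 1.782530602, as logged above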
Sep 12 23:42:18.370133 kubelet[2655]: I0912 23:42:18.370063 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-xtables-lock\") pod \"kube-proxy-9rx9z\" (UID: \"53e32a8c-bf71-4ba7-a67f-e85ccacd601c\") " pod="kube-system/kube-proxy-9rx9z" Sep 12 23:42:18.370133 kubelet[2655]: I0912 23:42:18.370104 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-lib-modules\") pod \"kube-proxy-9rx9z\" (UID: \"53e32a8c-bf71-4ba7-a67f-e85ccacd601c\") " pod="kube-system/kube-proxy-9rx9z" Sep 12 23:42:18.370316 kubelet[2655]: I0912 23:42:18.370151 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-bpf-maps\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370316 kubelet[2655]: I0912 23:42:18.370169 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hostproc\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370316 kubelet[2655]: I0912 23:42:18.370184 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-cgroup\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370316 kubelet[2655]: I0912 23:42:18.370228 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hubble-tls\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370316 kubelet[2655]: I0912 23:42:18.370286 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-kube-proxy\") pod \"kube-proxy-9rx9z\" (UID: \"53e32a8c-bf71-4ba7-a67f-e85ccacd601c\") " pod="kube-system/kube-proxy-9rx9z" Sep 12 23:42:18.370438 kubelet[2655]: I0912 23:42:18.370324 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-etc-cni-netd\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370438 kubelet[2655]: I0912 23:42:18.370360 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-xtables-lock\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370438 kubelet[2655]: I0912 23:42:18.370376 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-clustermesh-secrets\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370438 kubelet[2655]: I0912 23:42:18.370392 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-config-path\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370438 kubelet[2655]: I0912 23:42:18.370411 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xdn\" (UniqueName: \"kubernetes.io/projected/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-kube-api-access-66xdn\") pod \"kube-proxy-9rx9z\" (UID: \"53e32a8c-bf71-4ba7-a67f-e85ccacd601c\") " pod="kube-system/kube-proxy-9rx9z" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370429 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cni-path\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370444 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-lib-modules\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370463 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-run\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370476 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-net\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370489 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-kernel\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.370579 kubelet[2655]: I0912 23:42:18.370502 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdrh5\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5\") pod \"cilium-7jq7n\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " pod="kube-system/cilium-7jq7n" Sep 12 23:42:18.486509 kubelet[2655]: E0912 23:42:18.486004 2655 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 23:42:18.486509 kubelet[2655]: E0912 23:42:18.486037 2655 projected.go:194] Error preparing data for projected volume kube-api-access-tdrh5 for pod 
kube-system/cilium-7jq7n: configmap "kube-root-ca.crt" not found Sep 12 23:42:18.486509 kubelet[2655]: E0912 23:42:18.486098 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5 podName:052a3dd4-2cc2-4d9e-a3b6-1f630d554c88 nodeName:}" failed. No retries permitted until 2025-09-12 23:42:18.98607642 +0000 UTC m=+6.323881992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tdrh5" (UniqueName: "kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5") pod "cilium-7jq7n" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88") : configmap "kube-root-ca.crt" not found Sep 12 23:42:18.487332 kubelet[2655]: E0912 23:42:18.487274 2655 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 23:42:18.487332 kubelet[2655]: E0912 23:42:18.487295 2655 projected.go:194] Error preparing data for projected volume kube-api-access-66xdn for pod kube-system/kube-proxy-9rx9z: configmap "kube-root-ca.crt" not found Sep 12 23:42:18.487470 kubelet[2655]: E0912 23:42:18.487444 2655 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-kube-api-access-66xdn podName:53e32a8c-bf71-4ba7-a67f-e85ccacd601c nodeName:}" failed. No retries permitted until 2025-09-12 23:42:18.987427893 +0000 UTC m=+6.325233465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-66xdn" (UniqueName: "kubernetes.io/projected/53e32a8c-bf71-4ba7-a67f-e85ccacd601c-kube-api-access-66xdn") pod "kube-proxy-9rx9z" (UID: "53e32a8c-bf71-4ba7-a67f-e85ccacd601c") : configmap "kube-root-ca.crt" not found Sep 12 23:42:18.876614 kubelet[2655]: I0912 23:42:18.876576 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5cec8e2-406c-4e4c-847e-768029ca7270-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g4fk9\" (UID: \"d5cec8e2-406c-4e4c-847e-768029ca7270\") " pod="kube-system/cilium-operator-6c4d7847fc-g4fk9" Sep 12 23:42:18.878563 systemd[1]: Created slice kubepods-besteffort-podd5cec8e2_406c_4e4c_847e_768029ca7270.slice - libcontainer container kubepods-besteffort-podd5cec8e2_406c_4e4c_847e_768029ca7270.slice. 
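The kube-api-access mounts for both pods fail on the first pass because the kube-root-ca.crt ConfigMap has not yet been published into kube-system, so the volume operations are re-queued with a 500ms durationBeforeRetry and succeed shortly afterwards. The sketch below shows the general retry-with-backoff shape of that behaviour; the constants and the toy mount function are assumptions, not the kubelet's actual policy.

    import time

    def retry_with_backoff(op, initial_delay=0.5, factor=2.0, max_delay=16.0, attempts=8):
        """Run op(), waiting a little longer after each failure before trying again."""
        delay = initial_delay
        for attempt in range(1, attempts + 1):
            try:
                return op()
            except RuntimeError as err:
                print(f"attempt {attempt} failed: {err}; next retry in {delay:.1f}s")
                time.sleep(delay)
                delay = min(delay * factor, max_delay)
        raise TimeoutError("operation did not succeed within the retry budget")

    # Toy stand-in for the projected-volume mount: fails until the ConfigMap "appears".
    state = {"calls": 0}
    def mount_token_volume():
        state["calls"] += 1
        if state["calls"] < 3:
            raise RuntimeError('configmap "kube-root-ca.crt" not found')
        return "kube-api-access volume mounted"

    print(retry_with_backoff(mount_token_volume))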
Sep 12 23:42:18.977756 kubelet[2655]: I0912 23:42:18.977695 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmh7\" (UniqueName: \"kubernetes.io/projected/d5cec8e2-406c-4e4c-847e-768029ca7270-kube-api-access-4wmh7\") pod \"cilium-operator-6c4d7847fc-g4fk9\" (UID: \"d5cec8e2-406c-4e4c-847e-768029ca7270\") " pod="kube-system/cilium-operator-6c4d7847fc-g4fk9" Sep 12 23:42:19.121442 containerd[1542]: time="2025-09-12T23:42:19.121393442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rx9z,Uid:53e32a8c-bf71-4ba7-a67f-e85ccacd601c,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:19.126290 containerd[1542]: time="2025-09-12T23:42:19.126211703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jq7n,Uid:052a3dd4-2cc2-4d9e-a3b6-1f630d554c88,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:19.138039 containerd[1542]: time="2025-09-12T23:42:19.137822061Z" level=info msg="connecting to shim d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955" address="unix:///run/containerd/s/ab6793d66583e1304c2de3cf68b0d188e8639cd3ba41d2d7032e1246e24bd828" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:19.145052 containerd[1542]: time="2025-09-12T23:42:19.145009549Z" level=info msg="connecting to shim ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:19.166411 systemd[1]: Started cri-containerd-d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955.scope - libcontainer container d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955. Sep 12 23:42:19.169377 systemd[1]: Started cri-containerd-ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd.scope - libcontainer container ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd. 
Sep 12 23:42:19.184160 containerd[1542]: time="2025-09-12T23:42:19.183433907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4fk9,Uid:d5cec8e2-406c-4e4c-847e-768029ca7270,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:19.204509 containerd[1542]: time="2025-09-12T23:42:19.204462449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rx9z,Uid:53e32a8c-bf71-4ba7-a67f-e85ccacd601c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955\"" Sep 12 23:42:19.220521 containerd[1542]: time="2025-09-12T23:42:19.220474995Z" level=info msg="CreateContainer within sandbox \"d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:42:19.226002 containerd[1542]: time="2025-09-12T23:42:19.225956708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jq7n,Uid:052a3dd4-2cc2-4d9e-a3b6-1f630d554c88,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\"" Sep 12 23:42:19.227335 containerd[1542]: time="2025-09-12T23:42:19.227301015Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 23:42:19.230994 containerd[1542]: time="2025-09-12T23:42:19.230954824Z" level=info msg="Container fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:19.255750 containerd[1542]: time="2025-09-12T23:42:19.255707220Z" level=info msg="CreateContainer within sandbox \"d88bae6a0f4657addb8707e270f17a30221ff9f339753d99f7fd5029dc926955\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc\"" Sep 12 23:42:19.258097 containerd[1542]: time="2025-09-12T23:42:19.258066527Z" level=info msg="StartContainer for \"fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc\"" Sep 12 23:42:19.258756 containerd[1542]: time="2025-09-12T23:42:19.258729139Z" level=info msg="connecting to shim 35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323" address="unix:///run/containerd/s/73fe292184c3b6b6ec32842088a3a5823ab1b10dd84d7a339ffb01916f471882" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:19.260083 containerd[1542]: time="2025-09-12T23:42:19.260050244Z" level=info msg="connecting to shim fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc" address="unix:///run/containerd/s/ab6793d66583e1304c2de3cf68b0d188e8639cd3ba41d2d7032e1246e24bd828" protocol=ttrpc version=3 Sep 12 23:42:19.295432 systemd[1]: Started cri-containerd-35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323.scope - libcontainer container 35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323. Sep 12 23:42:19.296439 systemd[1]: Started cri-containerd-fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc.scope - libcontainer container fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc. 
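The cilium image above is requested as name:tag@digest; when a digest is present the pull is pinned to it, consistent with the later ImageCreate/Pulled records showing an empty repo tag and only the sha256 reference. A small parser for that reference form, written against the common name[:tag][@digest] convention rather than any containerd API:

    def split_image_ref(ref: str):
        """Split 'name[:tag][@sha256:...]' into (name, tag, digest); edge cases ignored."""
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        name, tag = ref, None
        if ":" in ref.rsplit("/", 1)[-1]:        # a ':' in the last path segment is a tag
            name, tag = ref.rsplit(":", 1)
        return name, tag, digest

    name, tag, digest = split_image_ref(
        "quay.io/cilium/cilium:v1.12.5"
        "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
    )
    print(name, tag, digest, sep="\n")           # repository, tag, and pinned digest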
Sep 12 23:42:19.331712 containerd[1542]: time="2025-09-12T23:42:19.331608301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4fk9,Uid:d5cec8e2-406c-4e4c-847e-768029ca7270,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\"" Sep 12 23:42:19.337857 containerd[1542]: time="2025-09-12T23:42:19.337825192Z" level=info msg="StartContainer for \"fdfb125ebd2ef90ba8447f993ec681e5806175740a681a2579b8a8970ac32dcc\" returns successfully" Sep 12 23:42:20.682126 kubelet[2655]: I0912 23:42:20.682073 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rx9z" podStartSLOduration=2.682057383 podStartE2EDuration="2.682057383s" podCreationTimestamp="2025-09-12 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:42:19.781861896 +0000 UTC m=+7.119667468" watchObservedRunningTime="2025-09-12 23:42:20.682057383 +0000 UTC m=+8.019862955" Sep 12 23:42:25.972575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870255849.mount: Deactivated successfully. Sep 12 23:42:27.283161 containerd[1542]: time="2025-09-12T23:42:27.283103205Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:27.284364 containerd[1542]: time="2025-09-12T23:42:27.284159019Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 23:42:27.285067 containerd[1542]: time="2025-09-12T23:42:27.285034584Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:27.286463 containerd[1542]: time="2025-09-12T23:42:27.286429416Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.059091478s" Sep 12 23:42:27.286643 containerd[1542]: time="2025-09-12T23:42:27.286560103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 23:42:27.291981 containerd[1542]: time="2025-09-12T23:42:27.291235463Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 23:42:27.296846 containerd[1542]: time="2025-09-12T23:42:27.296803869Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:42:27.334505 containerd[1542]: time="2025-09-12T23:42:27.334467044Z" level=info msg="Container eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:27.336076 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1930494534.mount: Deactivated successfully. Sep 12 23:42:27.340051 containerd[1542]: time="2025-09-12T23:42:27.339948645Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\"" Sep 12 23:42:27.340536 containerd[1542]: time="2025-09-12T23:42:27.340511274Z" level=info msg="StartContainer for \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\"" Sep 12 23:42:27.342387 containerd[1542]: time="2025-09-12T23:42:27.342353089Z" level=info msg="connecting to shim eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" protocol=ttrpc version=3 Sep 12 23:42:27.384366 systemd[1]: Started cri-containerd-eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841.scope - libcontainer container eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841. Sep 12 23:42:27.422457 systemd[1]: cri-containerd-eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841.scope: Deactivated successfully. Sep 12 23:42:27.431152 containerd[1542]: time="2025-09-12T23:42:27.431107129Z" level=info msg="StartContainer for \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" returns successfully" Sep 12 23:42:27.450370 containerd[1542]: time="2025-09-12T23:42:27.450290154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" id:\"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" pid:3083 exited_at:{seconds:1757720547 nanos:440879551}" Sep 12 23:42:27.452575 containerd[1542]: time="2025-09-12T23:42:27.452524589Z" level=info msg="received exit event container_id:\"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" id:\"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" pid:3083 exited_at:{seconds:1757720547 nanos:440879551}" Sep 12 23:42:27.798573 containerd[1542]: time="2025-09-12T23:42:27.798532045Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:42:27.805286 containerd[1542]: time="2025-09-12T23:42:27.805198867Z" level=info msg="Container a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:27.811695 containerd[1542]: time="2025-09-12T23:42:27.811637838Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\"" Sep 12 23:42:27.812225 containerd[1542]: time="2025-09-12T23:42:27.812199467Z" level=info msg="StartContainer for \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\"" Sep 12 23:42:27.815750 containerd[1542]: time="2025-09-12T23:42:27.815695047Z" level=info msg="connecting to shim a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" protocol=ttrpc version=3 Sep 12 23:42:27.852478 systemd[1]: Started 
cri-containerd-a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db.scope - libcontainer container a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db. Sep 12 23:42:27.877923 containerd[1542]: time="2025-09-12T23:42:27.877883442Z" level=info msg="StartContainer for \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" returns successfully" Sep 12 23:42:27.889591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:42:27.889810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:42:27.890449 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:42:27.891920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:42:27.893829 systemd[1]: cri-containerd-a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db.scope: Deactivated successfully. Sep 12 23:42:27.895352 containerd[1542]: time="2025-09-12T23:42:27.894290884Z" level=info msg="received exit event container_id:\"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" id:\"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" pid:3129 exited_at:{seconds:1757720547 nanos:894049832}" Sep 12 23:42:27.895352 containerd[1542]: time="2025-09-12T23:42:27.894534377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" id:\"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" pid:3129 exited_at:{seconds:1757720547 nanos:894049832}" Sep 12 23:42:27.926270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:42:28.304637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841-rootfs.mount: Deactivated successfully. Sep 12 23:42:28.602021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980610397.mount: Deactivated successfully. Sep 12 23:42:28.807442 containerd[1542]: time="2025-09-12T23:42:28.807402689Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:42:28.820670 containerd[1542]: time="2025-09-12T23:42:28.820596253Z" level=info msg="Container 13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:28.866576 containerd[1542]: time="2025-09-12T23:42:28.866156117Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\"" Sep 12 23:42:28.868052 containerd[1542]: time="2025-09-12T23:42:28.867034320Z" level=info msg="StartContainer for \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\"" Sep 12 23:42:28.869344 containerd[1542]: time="2025-09-12T23:42:28.869302271Z" level=info msg="connecting to shim 13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" protocol=ttrpc version=3 Sep 12 23:42:28.887429 systemd[1]: Started cri-containerd-13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5.scope - libcontainer container 13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5. 
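The apply-sysctl-overwrites init container and the systemd-sysctl.service restart recorded above both deal with kernel parameters, which Linux exposes as files under /proc/sys. Which parameters the cilium init container actually touches is not visible in this log; the sketch below only shows the /proc/sys access pattern itself.

    from pathlib import Path

    def read_sysctl(name: str) -> str:
        """Read a sysctl value via its /proc/sys path, e.g. 'net.ipv4.ip_forward'."""
        return Path("/proc/sys", *name.split(".")).read_text().strip()

    if __name__ == "__main__":
        # Writing works the same way, with root: Path("/proc/sys/...").write_text("1\n")
        print("net.ipv4.ip_forward =", read_sysctl("net.ipv4.ip_forward"))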
Sep 12 23:42:28.924364 systemd[1]: cri-containerd-13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5.scope: Deactivated successfully. Sep 12 23:42:28.926654 containerd[1542]: time="2025-09-12T23:42:28.926520064Z" level=info msg="received exit event container_id:\"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" id:\"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" pid:3188 exited_at:{seconds:1757720548 nanos:926352175}" Sep 12 23:42:28.926654 containerd[1542]: time="2025-09-12T23:42:28.926587347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" id:\"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" pid:3188 exited_at:{seconds:1757720548 nanos:926352175}" Sep 12 23:42:28.926751 containerd[1542]: time="2025-09-12T23:42:28.926692912Z" level=info msg="StartContainer for \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" returns successfully" Sep 12 23:42:29.105188 containerd[1542]: time="2025-09-12T23:42:29.105136131Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:29.105553 containerd[1542]: time="2025-09-12T23:42:29.105524789Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 23:42:29.106455 containerd[1542]: time="2025-09-12T23:42:29.106402190Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:42:29.107619 containerd[1542]: time="2025-09-12T23:42:29.107584805Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.816277778s" Sep 12 23:42:29.107619 containerd[1542]: time="2025-09-12T23:42:29.107616046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 23:42:29.111539 containerd[1542]: time="2025-09-12T23:42:29.111466585Z" level=info msg="CreateContainer within sandbox \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 23:42:29.120325 containerd[1542]: time="2025-09-12T23:42:29.120018542Z" level=info msg="Container 4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:29.125698 containerd[1542]: time="2025-09-12T23:42:29.125647843Z" level=info msg="CreateContainer within sandbox \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\"" Sep 12 23:42:29.126225 containerd[1542]: time="2025-09-12T23:42:29.126111504Z" level=info 
msg="StartContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\"" Sep 12 23:42:29.127252 containerd[1542]: time="2025-09-12T23:42:29.127183514Z" level=info msg="connecting to shim 4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f" address="unix:///run/containerd/s/73fe292184c3b6b6ec32842088a3a5823ab1b10dd84d7a339ffb01916f471882" protocol=ttrpc version=3 Sep 12 23:42:29.154428 systemd[1]: Started cri-containerd-4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f.scope - libcontainer container 4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f. Sep 12 23:42:29.214798 containerd[1542]: time="2025-09-12T23:42:29.214757379Z" level=info msg="StartContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" returns successfully" Sep 12 23:42:29.305609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613499075.mount: Deactivated successfully. Sep 12 23:42:29.560023 update_engine[1517]: I20250912 23:42:29.559289 1517 update_attempter.cc:509] Updating boot flags... Sep 12 23:42:29.816467 kubelet[2655]: I0912 23:42:29.816068 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g4fk9" podStartSLOduration=2.040983571 podStartE2EDuration="11.816052608s" podCreationTimestamp="2025-09-12 23:42:18 +0000 UTC" firstStartedPulling="2025-09-12 23:42:19.333602338 +0000 UTC m=+6.671407870" lastFinishedPulling="2025-09-12 23:42:29.108671335 +0000 UTC m=+16.446476907" observedRunningTime="2025-09-12 23:42:29.814171641 +0000 UTC m=+17.151977213" watchObservedRunningTime="2025-09-12 23:42:29.816052608 +0000 UTC m=+17.153858180" Sep 12 23:42:29.817525 containerd[1542]: time="2025-09-12T23:42:29.817492995Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:42:29.829457 containerd[1542]: time="2025-09-12T23:42:29.829397507Z" level=info msg="Container 127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:29.836574 containerd[1542]: time="2025-09-12T23:42:29.836430874Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\"" Sep 12 23:42:29.837131 containerd[1542]: time="2025-09-12T23:42:29.837045622Z" level=info msg="StartContainer for \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\"" Sep 12 23:42:29.837984 containerd[1542]: time="2025-09-12T23:42:29.837958105Z" level=info msg="connecting to shim 127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" protocol=ttrpc version=3 Sep 12 23:42:29.861425 systemd[1]: Started cri-containerd-127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d.scope - libcontainer container 127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d. Sep 12 23:42:29.892915 systemd[1]: cri-containerd-127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d.scope: Deactivated successfully. 
Sep 12 23:42:29.896095 containerd[1542]: time="2025-09-12T23:42:29.896030600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" id:\"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" pid:3286 exited_at:{seconds:1757720549 nanos:895660743}" Sep 12 23:42:29.896337 containerd[1542]: time="2025-09-12T23:42:29.896296852Z" level=info msg="received exit event container_id:\"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" id:\"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" pid:3286 exited_at:{seconds:1757720549 nanos:895660743}" Sep 12 23:42:29.899167 containerd[1542]: time="2025-09-12T23:42:29.899130344Z" level=info msg="StartContainer for \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" returns successfully" Sep 12 23:42:29.916896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d-rootfs.mount: Deactivated successfully. Sep 12 23:42:30.819191 containerd[1542]: time="2025-09-12T23:42:30.819144371Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:42:30.829146 containerd[1542]: time="2025-09-12T23:42:30.828978166Z" level=info msg="Container 9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:30.833998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825854768.mount: Deactivated successfully. Sep 12 23:42:30.839084 containerd[1542]: time="2025-09-12T23:42:30.839051011Z" level=info msg="CreateContainer within sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\"" Sep 12 23:42:30.839501 containerd[1542]: time="2025-09-12T23:42:30.839479190Z" level=info msg="StartContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\"" Sep 12 23:42:30.840499 containerd[1542]: time="2025-09-12T23:42:30.840477194Z" level=info msg="connecting to shim 9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7" address="unix:///run/containerd/s/414a53dc58f24f617225827780e05ed80ba29eba0245b9ed61f22ef5ffc305db" protocol=ttrpc version=3 Sep 12 23:42:30.862407 systemd[1]: Started cri-containerd-9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7.scope - libcontainer container 9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7. 
Sep 12 23:42:30.900255 containerd[1542]: time="2025-09-12T23:42:30.900203431Z" level=info msg="StartContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" returns successfully" Sep 12 23:42:30.995773 containerd[1542]: time="2025-09-12T23:42:30.995732730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" id:\"e7de9d82046b37a0824d17c5cdcc3935ab4c27db4efd2e6c94b7739a28a00beb\" pid:3353 exited_at:{seconds:1757720550 nanos:995416517}" Sep 12 23:42:31.031341 kubelet[2655]: I0912 23:42:31.031313 2655 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:42:31.100457 systemd[1]: Created slice kubepods-burstable-pod3e8b6d93_f3fe_4613_b325_4be8956d3103.slice - libcontainer container kubepods-burstable-pod3e8b6d93_f3fe_4613_b325_4be8956d3103.slice. Sep 12 23:42:31.107221 systemd[1]: Created slice kubepods-burstable-podb2463fd4_aba4_4d47_8e13_1200d4c0cf2c.slice - libcontainer container kubepods-burstable-podb2463fd4_aba4_4d47_8e13_1200d4c0cf2c.slice. Sep 12 23:42:31.276094 kubelet[2655]: I0912 23:42:31.276052 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhz7\" (UniqueName: \"kubernetes.io/projected/3e8b6d93-f3fe-4613-b325-4be8956d3103-kube-api-access-6jhz7\") pod \"coredns-674b8bbfcf-lxd2d\" (UID: \"3e8b6d93-f3fe-4613-b325-4be8956d3103\") " pod="kube-system/coredns-674b8bbfcf-lxd2d" Sep 12 23:42:31.276094 kubelet[2655]: I0912 23:42:31.276099 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2463fd4-aba4-4d47-8e13-1200d4c0cf2c-config-volume\") pod \"coredns-674b8bbfcf-dhxpp\" (UID: \"b2463fd4-aba4-4d47-8e13-1200d4c0cf2c\") " pod="kube-system/coredns-674b8bbfcf-dhxpp" Sep 12 23:42:31.276259 kubelet[2655]: I0912 23:42:31.276120 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvxl\" (UniqueName: \"kubernetes.io/projected/b2463fd4-aba4-4d47-8e13-1200d4c0cf2c-kube-api-access-pkvxl\") pod \"coredns-674b8bbfcf-dhxpp\" (UID: \"b2463fd4-aba4-4d47-8e13-1200d4c0cf2c\") " pod="kube-system/coredns-674b8bbfcf-dhxpp" Sep 12 23:42:31.276259 kubelet[2655]: I0912 23:42:31.276137 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e8b6d93-f3fe-4613-b325-4be8956d3103-config-volume\") pod \"coredns-674b8bbfcf-lxd2d\" (UID: \"3e8b6d93-f3fe-4613-b325-4be8956d3103\") " pod="kube-system/coredns-674b8bbfcf-lxd2d" Sep 12 23:42:31.404816 containerd[1542]: time="2025-09-12T23:42:31.404708982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxd2d,Uid:3e8b6d93-f3fe-4613-b325-4be8956d3103,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:31.411087 containerd[1542]: time="2025-09-12T23:42:31.410821359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhxpp,Uid:b2463fd4-aba4-4d47-8e13-1200d4c0cf2c,Namespace:kube-system,Attempt:0,}" Sep 12 23:42:31.838135 kubelet[2655]: I0912 23:42:31.837479 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7jq7n" podStartSLOduration=5.773422878 podStartE2EDuration="13.837461661s" podCreationTimestamp="2025-09-12 23:42:18 +0000 UTC" firstStartedPulling="2025-09-12 23:42:19.226961068 +0000 UTC 
m=+6.564766640" lastFinishedPulling="2025-09-12 23:42:27.290999851 +0000 UTC m=+14.628805423" observedRunningTime="2025-09-12 23:42:31.837251292 +0000 UTC m=+19.175056904" watchObservedRunningTime="2025-09-12 23:42:31.837461661 +0000 UTC m=+19.175267233" Sep 12 23:42:32.936880 systemd-networkd[1422]: cilium_host: Link UP Sep 12 23:42:32.937017 systemd-networkd[1422]: cilium_net: Link UP Sep 12 23:42:32.937177 systemd-networkd[1422]: cilium_net: Gained carrier Sep 12 23:42:32.937338 systemd-networkd[1422]: cilium_host: Gained carrier Sep 12 23:42:33.011212 systemd-networkd[1422]: cilium_vxlan: Link UP Sep 12 23:42:33.011218 systemd-networkd[1422]: cilium_vxlan: Gained carrier Sep 12 23:42:33.259283 kernel: NET: Registered PF_ALG protocol family Sep 12 23:42:33.398386 systemd-networkd[1422]: cilium_host: Gained IPv6LL Sep 12 23:42:33.663465 systemd-networkd[1422]: cilium_net: Gained IPv6LL Sep 12 23:42:33.812426 systemd-networkd[1422]: lxc_health: Link UP Sep 12 23:42:33.812790 systemd-networkd[1422]: lxc_health: Gained carrier Sep 12 23:42:33.959124 systemd-networkd[1422]: lxc63848dff6ade: Link UP Sep 12 23:42:33.960262 kernel: eth0: renamed from tmp44029 Sep 12 23:42:33.960825 systemd-networkd[1422]: lxcae2ae7e557ec: Link UP Sep 12 23:42:33.962300 kernel: eth0: renamed from tmpf5153 Sep 12 23:42:33.961080 systemd-networkd[1422]: lxcae2ae7e557ec: Gained carrier Sep 12 23:42:33.964037 systemd-networkd[1422]: lxc63848dff6ade: Gained carrier Sep 12 23:42:34.494413 systemd-networkd[1422]: cilium_vxlan: Gained IPv6LL Sep 12 23:42:35.135364 systemd-networkd[1422]: lxc63848dff6ade: Gained IPv6LL Sep 12 23:42:35.198395 systemd-networkd[1422]: lxcae2ae7e557ec: Gained IPv6LL Sep 12 23:42:35.838793 systemd-networkd[1422]: lxc_health: Gained IPv6LL Sep 12 23:42:37.451489 containerd[1542]: time="2025-09-12T23:42:37.451402400Z" level=info msg="connecting to shim 44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44" address="unix:///run/containerd/s/69687bfc617792730866f6d7a9445a3ab5526512212a6b9c832ac39f27773bc2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:37.452126 containerd[1542]: time="2025-09-12T23:42:37.452061781Z" level=info msg="connecting to shim f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1" address="unix:///run/containerd/s/23d356a29334199b0da0135756f68b44cae544dd38827161275ef5f92ca2d4d6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:42:37.469487 systemd[1]: Started cri-containerd-44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44.scope - libcontainer container 44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44. Sep 12 23:42:37.473979 systemd[1]: Started cri-containerd-f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1.scope - libcontainer container f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1. 
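Each pod_startup_latency_tracker entry above carries a wall-clock creation/running pair plus monotonic (m=+) pull timestamps. The logged numbers are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that value minus the image-pull window. A sketch reproducing the cilium-7jq7n figures (values copied from the entry; this only restates the arithmetic visible in the log, not kubelet internals):

```python
# Monotonic m=+ values from the cilium-7jq7n entry above.
first_started_pulling = 6.564766640
last_finished_pulling = 14.628805423

# Wall clock: watchObservedRunningTime 23:42:31.837461661 minus pod creation at 23:42:18.
e2e_duration = 13.837461661

pull_window = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window

print(f"pull window  {pull_window:.9f}s")   # 8.064038783
print(f"SLO duration {slo_duration:.9f}s")  # 5.773422878, matches podStartSLOduration
```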
Sep 12 23:42:37.482485 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:42:37.485590 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:42:37.506765 containerd[1542]: time="2025-09-12T23:42:37.506734365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxd2d,Uid:3e8b6d93-f3fe-4613-b325-4be8956d3103,Namespace:kube-system,Attempt:0,} returns sandbox id \"44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44\"" Sep 12 23:42:37.509754 containerd[1542]: time="2025-09-12T23:42:37.509719740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhxpp,Uid:b2463fd4-aba4-4d47-8e13-1200d4c0cf2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1\"" Sep 12 23:42:37.513818 containerd[1542]: time="2025-09-12T23:42:37.513792870Z" level=info msg="CreateContainer within sandbox \"44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:42:37.516299 containerd[1542]: time="2025-09-12T23:42:37.516268949Z" level=info msg="CreateContainer within sandbox \"f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:42:37.525037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690790897.mount: Deactivated successfully. Sep 12 23:42:37.526969 containerd[1542]: time="2025-09-12T23:42:37.526409073Z" level=info msg="Container bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:37.526969 containerd[1542]: time="2025-09-12T23:42:37.526888768Z" level=info msg="Container 3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:42:37.531883 containerd[1542]: time="2025-09-12T23:42:37.531847886Z" level=info msg="CreateContainer within sandbox \"44029b5987e7494e662cc2d04de68e8b9fd6ea6cff5d9152b4d849b203833c44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac\"" Sep 12 23:42:37.533756 containerd[1542]: time="2025-09-12T23:42:37.533726746Z" level=info msg="CreateContainer within sandbox \"f5153359a328c6f246ed16ae24cc553fa4976b35aca5683caffc3aeb200cdcd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75\"" Sep 12 23:42:37.534406 containerd[1542]: time="2025-09-12T23:42:37.534383527Z" level=info msg="StartContainer for \"bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac\"" Sep 12 23:42:37.534587 containerd[1542]: time="2025-09-12T23:42:37.534551813Z" level=info msg="StartContainer for \"3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75\"" Sep 12 23:42:37.535272 containerd[1542]: time="2025-09-12T23:42:37.535230074Z" level=info msg="connecting to shim 3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75" address="unix:///run/containerd/s/23d356a29334199b0da0135756f68b44cae544dd38827161275ef5f92ca2d4d6" protocol=ttrpc version=3 Sep 12 23:42:37.536077 containerd[1542]: time="2025-09-12T23:42:37.536036220Z" level=info msg="connecting to shim bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac" 
address="unix:///run/containerd/s/69687bfc617792730866f6d7a9445a3ab5526512212a6b9c832ac39f27773bc2" protocol=ttrpc version=3 Sep 12 23:42:37.565389 systemd[1]: Started cri-containerd-3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75.scope - libcontainer container 3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75. Sep 12 23:42:37.566450 systemd[1]: Started cri-containerd-bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac.scope - libcontainer container bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac. Sep 12 23:42:37.597559 containerd[1542]: time="2025-09-12T23:42:37.597453659Z" level=info msg="StartContainer for \"bd0966e23801ef57355441f46f44fd3ba92d9c51fb41dd27083f946c57ebecac\" returns successfully" Sep 12 23:42:37.603838 containerd[1542]: time="2025-09-12T23:42:37.603797781Z" level=info msg="StartContainer for \"3fc739a05d2aa08f885291e327ae00b956e1a87a5f8fd59987bb21288a047b75\" returns successfully" Sep 12 23:42:37.860192 kubelet[2655]: I0912 23:42:37.860133 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lxd2d" podStartSLOduration=19.860118677 podStartE2EDuration="19.860118677s" podCreationTimestamp="2025-09-12 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:42:37.859857109 +0000 UTC m=+25.197662681" watchObservedRunningTime="2025-09-12 23:42:37.860118677 +0000 UTC m=+25.197924249" Sep 12 23:42:37.870797 kubelet[2655]: I0912 23:42:37.870534 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhxpp" podStartSLOduration=19.870518409 podStartE2EDuration="19.870518409s" podCreationTimestamp="2025-09-12 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:42:37.870102115 +0000 UTC m=+25.207907687" watchObservedRunningTime="2025-09-12 23:42:37.870518409 +0000 UTC m=+25.208323941" Sep 12 23:42:39.488781 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:49370.service - OpenSSH per-connection server daemon (10.0.0.1:49370). Sep 12 23:42:39.532229 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:39.533468 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:39.537175 systemd-logind[1515]: New session 8 of user core. Sep 12 23:42:39.547429 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 23:42:39.671792 sshd[4000]: Connection closed by 10.0.0.1 port 49370 Sep 12 23:42:39.671350 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:39.674631 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:49370.service: Deactivated successfully. Sep 12 23:42:39.677144 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:42:39.677890 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:42:39.679097 systemd-logind[1515]: Removed session 8. Sep 12 23:42:44.686020 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:37404.service - OpenSSH per-connection server daemon (10.0.0.1:37404). 
Sep 12 23:42:44.731675 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 37404 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:44.732855 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:44.736763 systemd-logind[1515]: New session 9 of user core. Sep 12 23:42:44.747479 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:42:44.853810 sshd[4019]: Connection closed by 10.0.0.1 port 37404 Sep 12 23:42:44.854353 sshd-session[4017]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:44.857690 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:37404.service: Deactivated successfully. Sep 12 23:42:44.860702 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:42:44.861475 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:42:44.862759 systemd-logind[1515]: Removed session 9. Sep 12 23:42:49.876912 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:37406.service - OpenSSH per-connection server daemon (10.0.0.1:37406). Sep 12 23:42:49.929736 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 37406 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:49.930866 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:49.934291 systemd-logind[1515]: New session 10 of user core. Sep 12 23:42:49.953385 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:42:50.064660 sshd[4041]: Connection closed by 10.0.0.1 port 37406 Sep 12 23:42:50.065385 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:50.074435 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:37406.service: Deactivated successfully. Sep 12 23:42:50.075971 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:42:50.076673 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:42:50.080521 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198). Sep 12 23:42:50.081175 systemd-logind[1515]: Removed session 10. Sep 12 23:42:50.138018 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:50.139318 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:50.143681 systemd-logind[1515]: New session 11 of user core. Sep 12 23:42:50.149426 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:42:50.315374 sshd[4057]: Connection closed by 10.0.0.1 port 51198 Sep 12 23:42:50.316423 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:50.327948 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:51198.service: Deactivated successfully. Sep 12 23:42:50.333312 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:42:50.335063 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:42:50.339133 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:51208.service - OpenSSH per-connection server daemon (10.0.0.1:51208). Sep 12 23:42:50.340218 systemd-logind[1515]: Removed session 11. 
Sep 12 23:42:50.397455 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 51208 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:50.398594 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:50.403519 systemd-logind[1515]: New session 12 of user core. Sep 12 23:42:50.418437 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 23:42:50.531946 sshd[4071]: Connection closed by 10.0.0.1 port 51208 Sep 12 23:42:50.532277 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:50.535860 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:51208.service: Deactivated successfully. Sep 12 23:42:50.537644 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:42:50.538489 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:42:50.539823 systemd-logind[1515]: Removed session 12. Sep 12 23:42:55.550460 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:51214.service - OpenSSH per-connection server daemon (10.0.0.1:51214). Sep 12 23:42:55.589114 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 51214 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:42:55.590341 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:42:55.594634 systemd-logind[1515]: New session 13 of user core. Sep 12 23:42:55.600403 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 23:42:55.708181 sshd[4087]: Connection closed by 10.0.0.1 port 51214 Sep 12 23:42:55.708501 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Sep 12 23:42:55.711526 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:51214.service: Deactivated successfully. Sep 12 23:42:55.713310 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 23:42:55.714860 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Sep 12 23:42:55.715947 systemd-logind[1515]: Removed session 13. Sep 12 23:43:00.723336 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:59490.service - OpenSSH per-connection server daemon (10.0.0.1:59490). Sep 12 23:43:00.776774 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 59490 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:00.777912 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:00.782169 systemd-logind[1515]: New session 14 of user core. Sep 12 23:43:00.802381 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 23:43:00.912341 sshd[4103]: Connection closed by 10.0.0.1 port 59490 Sep 12 23:43:00.912684 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:00.925579 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:59490.service: Deactivated successfully. Sep 12 23:43:00.927531 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 23:43:00.928363 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Sep 12 23:43:00.931305 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:59494.service - OpenSSH per-connection server daemon (10.0.0.1:59494). Sep 12 23:43:00.931818 systemd-logind[1515]: Removed session 14. 
Sep 12 23:43:00.986530 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 59494 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:00.988251 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:00.992090 systemd-logind[1515]: New session 15 of user core. Sep 12 23:43:01.009391 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 23:43:01.182580 sshd[4119]: Connection closed by 10.0.0.1 port 59494 Sep 12 23:43:01.183766 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:01.192222 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:59494.service: Deactivated successfully. Sep 12 23:43:01.194486 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 23:43:01.195673 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Sep 12 23:43:01.198055 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:59504.service - OpenSSH per-connection server daemon (10.0.0.1:59504). Sep 12 23:43:01.199321 systemd-logind[1515]: Removed session 15. Sep 12 23:43:01.250832 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 59504 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:01.252138 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:01.256014 systemd-logind[1515]: New session 16 of user core. Sep 12 23:43:01.263405 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 23:43:01.854132 sshd[4133]: Connection closed by 10.0.0.1 port 59504 Sep 12 23:43:01.854636 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:01.863877 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:59504.service: Deactivated successfully. Sep 12 23:43:01.868666 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 23:43:01.873096 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Sep 12 23:43:01.878889 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:59518.service - OpenSSH per-connection server daemon (10.0.0.1:59518). Sep 12 23:43:01.881300 systemd-logind[1515]: Removed session 16. Sep 12 23:43:01.934488 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 59518 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:01.935805 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:01.939445 systemd-logind[1515]: New session 17 of user core. Sep 12 23:43:01.945389 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 23:43:02.163349 sshd[4153]: Connection closed by 10.0.0.1 port 59518 Sep 12 23:43:02.164400 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:02.172848 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:59518.service: Deactivated successfully. Sep 12 23:43:02.175462 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 23:43:02.177272 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Sep 12 23:43:02.179627 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:59522.service - OpenSSH per-connection server daemon (10.0.0.1:59522). Sep 12 23:43:02.180578 systemd-logind[1515]: Removed session 17. 
Sep 12 23:43:02.238819 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 59522 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:02.239932 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:02.243766 systemd-logind[1515]: New session 18 of user core. Sep 12 23:43:02.258395 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 23:43:02.364697 sshd[4166]: Connection closed by 10.0.0.1 port 59522 Sep 12 23:43:02.365037 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:02.368540 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:59522.service: Deactivated successfully. Sep 12 23:43:02.370826 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 23:43:02.371477 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Sep 12 23:43:02.372976 systemd-logind[1515]: Removed session 18. Sep 12 23:43:07.380484 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:59524.service - OpenSSH per-connection server daemon (10.0.0.1:59524). Sep 12 23:43:07.436455 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 59524 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:07.438052 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:07.441667 systemd-logind[1515]: New session 19 of user core. Sep 12 23:43:07.455680 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 23:43:07.560749 sshd[4185]: Connection closed by 10.0.0.1 port 59524 Sep 12 23:43:07.561048 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:07.564137 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:59524.service: Deactivated successfully. Sep 12 23:43:07.567296 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 23:43:07.567882 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. Sep 12 23:43:07.568911 systemd-logind[1515]: Removed session 19. Sep 12 23:43:12.576422 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:55832.service - OpenSSH per-connection server daemon (10.0.0.1:55832). Sep 12 23:43:12.632093 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 55832 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:12.633173 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:12.636630 systemd-logind[1515]: New session 20 of user core. Sep 12 23:43:12.651393 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 23:43:12.762649 sshd[4200]: Connection closed by 10.0.0.1 port 55832 Sep 12 23:43:12.762182 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:12.772512 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:55832.service: Deactivated successfully. Sep 12 23:43:12.773943 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 23:43:12.775797 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. Sep 12 23:43:12.778427 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:55836.service - OpenSSH per-connection server daemon (10.0.0.1:55836). Sep 12 23:43:12.778936 systemd-logind[1515]: Removed session 20. 
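The sshd/systemd-logind traffic in this stretch is very regular: Accepted publickey, session opened, Connection closed, session closed, Removed session N. A rough sketch (Python 3.8+; the regexes are tailored to the exact line shapes seen here, and the year is assumed to be 2025 since the journal timestamps omit it) that pairs open and close events by client port and reports session length:

```python
import re
from datetime import datetime

TS = r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
OPENED = re.compile(TS + r" sshd\[\d+\]: Accepted publickey for (\S+) from \S+ port (\d+)")
CLOSED = re.compile(TS + r" sshd\[\d+\]: Connection closed by \S+ port (\d+)")

def parse_ts(raw: str) -> datetime:
    # These journal lines carry no year; 2025 is assumed from the surrounding log.
    return datetime.strptime(f"2025 {raw}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    open_by_port = {}
    for line in lines:
        if m := OPENED.search(line):
            ts, user, port = m.groups()
            open_by_port[port] = (user, parse_ts(ts))
        elif m := CLOSED.search(line):
            ts, port = m.groups()
            if port in open_by_port:
                user, started = open_by_port.pop(port)
                yield user, port, (parse_ts(ts) - started).total_seconds()

sample = [
    "Sep 12 23:42:39.532229 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA ...",
    "Sep 12 23:42:39.671792 sshd[4000]: Connection closed by 10.0.0.1 port 49370",
]
for user, port, secs in session_durations(sample):
    print(f"{user} port {port}: {secs:.3f}s")  # core port 49370: 0.140s
```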
Sep 12 23:43:12.823609 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 55836 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:12.824643 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:12.828258 systemd-logind[1515]: New session 21 of user core. Sep 12 23:43:12.835375 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 23:43:14.650257 containerd[1542]: time="2025-09-12T23:43:14.649855767Z" level=info msg="StopContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" with timeout 30 (s)" Sep 12 23:43:14.651776 containerd[1542]: time="2025-09-12T23:43:14.651549388Z" level=info msg="Stop container \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" with signal terminated" Sep 12 23:43:14.661880 systemd[1]: cri-containerd-4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f.scope: Deactivated successfully. Sep 12 23:43:14.664613 containerd[1542]: time="2025-09-12T23:43:14.664500708Z" level=info msg="received exit event container_id:\"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" id:\"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" pid:3233 exited_at:{seconds:1757720594 nanos:664091103}" Sep 12 23:43:14.664613 containerd[1542]: time="2025-09-12T23:43:14.664592309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" id:\"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" pid:3233 exited_at:{seconds:1757720594 nanos:664091103}" Sep 12 23:43:14.675898 containerd[1542]: time="2025-09-12T23:43:14.675860449Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:43:14.681854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f-rootfs.mount: Deactivated successfully. 
Sep 12 23:43:14.682436 containerd[1542]: time="2025-09-12T23:43:14.682393330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" id:\"1413e6004c2aadfbd5464793f11fa2a92ee0093bdac68b36edb70562019457c7\" pid:4244 exited_at:{seconds:1757720594 nanos:682148167}" Sep 12 23:43:14.686660 containerd[1542]: time="2025-09-12T23:43:14.686621582Z" level=info msg="StopContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" with timeout 2 (s)" Sep 12 23:43:14.686956 containerd[1542]: time="2025-09-12T23:43:14.686922946Z" level=info msg="Stop container \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" with signal terminated" Sep 12 23:43:14.693492 systemd-networkd[1422]: lxc_health: Link DOWN Sep 12 23:43:14.693497 systemd-networkd[1422]: lxc_health: Lost carrier Sep 12 23:43:14.702127 containerd[1542]: time="2025-09-12T23:43:14.701643968Z" level=info msg="StopContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" returns successfully" Sep 12 23:43:14.704745 containerd[1542]: time="2025-09-12T23:43:14.704691246Z" level=info msg="StopPodSandbox for \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\"" Sep 12 23:43:14.710496 containerd[1542]: time="2025-09-12T23:43:14.710439437Z" level=info msg="Container to stop \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.711924 systemd[1]: cri-containerd-9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7.scope: Deactivated successfully. Sep 12 23:43:14.712355 systemd[1]: cri-containerd-9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7.scope: Consumed 6.021s CPU time, 123.6M memory peak, 140K read from disk, 12.9M written to disk. Sep 12 23:43:14.712724 containerd[1542]: time="2025-09-12T23:43:14.712694505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" pid:3322 exited_at:{seconds:1757720594 nanos:712324100}" Sep 12 23:43:14.712888 containerd[1542]: time="2025-09-12T23:43:14.712863427Z" level=info msg="received exit event container_id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" id:\"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" pid:3322 exited_at:{seconds:1757720594 nanos:712324100}" Sep 12 23:43:14.722676 systemd[1]: cri-containerd-35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323.scope: Deactivated successfully. Sep 12 23:43:14.725346 containerd[1542]: time="2025-09-12T23:43:14.725311981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" id:\"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" pid:2874 exit_status:137 exited_at:{seconds:1757720594 nanos:724810135}" Sep 12 23:43:14.734961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7-rootfs.mount: Deactivated successfully. 
Sep 12 23:43:14.744679 containerd[1542]: time="2025-09-12T23:43:14.744598820Z" level=info msg="StopContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" returns successfully" Sep 12 23:43:14.745138 containerd[1542]: time="2025-09-12T23:43:14.745095306Z" level=info msg="StopPodSandbox for \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\"" Sep 12 23:43:14.745187 containerd[1542]: time="2025-09-12T23:43:14.745153627Z" level=info msg="Container to stop \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.745187 containerd[1542]: time="2025-09-12T23:43:14.745165787Z" level=info msg="Container to stop \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.745187 containerd[1542]: time="2025-09-12T23:43:14.745175307Z" level=info msg="Container to stop \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.745187 containerd[1542]: time="2025-09-12T23:43:14.745183387Z" level=info msg="Container to stop \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.745370 containerd[1542]: time="2025-09-12T23:43:14.745191828Z" level=info msg="Container to stop \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:43:14.753317 systemd[1]: cri-containerd-ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd.scope: Deactivated successfully. Sep 12 23:43:14.756428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323-rootfs.mount: Deactivated successfully. Sep 12 23:43:14.761361 containerd[1542]: time="2025-09-12T23:43:14.761316107Z" level=info msg="shim disconnected" id=35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323 namespace=k8s.io Sep 12 23:43:14.774310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd-rootfs.mount: Deactivated successfully. 
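The StopPodSandbox messages above list the cilium pod's containers by ID only, but each ID was introduced earlier by a CreateContainer entry that also carries the container name and sandbox. A sketch that rebuilds the name-to-ID map from those entries (the regex assumes the escaped-quote msg format seen in this journal):

```python
import re

CREATE = re.compile(
    r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]+)\\" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\"(?P<cid>[0-9a-f]+)\\"'
)

def container_names(lines):
    """Map container ID -> (name, sandbox ID) from containerd CreateContainer entries."""
    mapping = {}
    for line in lines:
        if m := CREATE.search(line):
            mapping[m.group("cid")] = (m.group("name"), m.group("sandbox"))
    return mapping
```

Over this journal it resolves eaa2b6a3… to mount-cgroup, a613b88b… to apply-sysctl-overwrites, 13501cf0… to mount-bpf-fs, 127eff2b… to clean-cilium-state and 9ac67d8a… to cilium-agent, all inside sandbox ac4b4742…, which is exactly the set named in the "Container to stop" messages above.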
Sep 12 23:43:14.776342 containerd[1542]: time="2025-09-12T23:43:14.761349988Z" level=warning msg="cleaning up after shim disconnected" id=35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323 namespace=k8s.io Sep 12 23:43:14.776454 containerd[1542]: time="2025-09-12T23:43:14.776344853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:43:14.776454 containerd[1542]: time="2025-09-12T23:43:14.775864367Z" level=info msg="shim disconnected" id=ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd namespace=k8s.io Sep 12 23:43:14.776498 containerd[1542]: time="2025-09-12T23:43:14.776442815Z" level=warning msg="cleaning up after shim disconnected" id=ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd namespace=k8s.io Sep 12 23:43:14.776498 containerd[1542]: time="2025-09-12T23:43:14.776471335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:43:14.793940 containerd[1542]: time="2025-09-12T23:43:14.793811470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" id:\"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" pid:2813 exit_status:137 exited_at:{seconds:1757720594 nanos:754683865}" Sep 12 23:43:14.794109 containerd[1542]: time="2025-09-12T23:43:14.794088793Z" level=info msg="received exit event sandbox_id:\"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" exit_status:137 exited_at:{seconds:1757720594 nanos:754683865}" Sep 12 23:43:14.795178 containerd[1542]: time="2025-09-12T23:43:14.794512078Z" level=info msg="received exit event sandbox_id:\"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" exit_status:137 exited_at:{seconds:1757720594 nanos:724810135}" Sep 12 23:43:14.795178 containerd[1542]: time="2025-09-12T23:43:14.794580359Z" level=info msg="TearDown network for sandbox \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" successfully" Sep 12 23:43:14.795178 containerd[1542]: time="2025-09-12T23:43:14.794603960Z" level=info msg="StopPodSandbox for \"35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323\" returns successfully" Sep 12 23:43:14.795178 containerd[1542]: time="2025-09-12T23:43:14.794877083Z" level=info msg="TearDown network for sandbox \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" successfully" Sep 12 23:43:14.795178 containerd[1542]: time="2025-09-12T23:43:14.794895283Z" level=info msg="StopPodSandbox for \"ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd\" returns successfully" Sep 12 23:43:14.795394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35b8a8cb342a819677d0c221f7aa6f6773ccd764f6cf771733497cbba2c00323-shm.mount: Deactivated successfully. Sep 12 23:43:14.795490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac4b4742b581855fb001494c6d895baee572a005f8b392d9c210c7484d0ad3fd-shm.mount: Deactivated successfully. 
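Both sandbox TaskExit events above report exit_status:137. By the usual 128+signal convention that decodes to SIGKILL, consistent with the sandbox (pause) containers being killed during teardown rather than exiting on their own. A one-liner sketch of the decode:

```python
import signal

exit_status = 137  # from the sandbox TaskExit events above
if exit_status > 128:
    sig = signal.Signals(exit_status - 128)
    print(f"terminated by {sig.name} ({sig.value})")  # terminated by SIGKILL (9)
```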
Sep 12 23:43:14.849655 kubelet[2655]: I0912 23:43:14.849594 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hostproc\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.849655 kubelet[2655]: I0912 23:43:14.849664 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-cgroup\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849681 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-run\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849695 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-bpf-maps\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849815 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-config-path\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849837 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-kernel\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849907 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hubble-tls\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850035 kubelet[2655]: I0912 23:43:14.849928 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-clustermesh-secrets\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850163 kubelet[2655]: I0912 23:43:14.849994 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdrh5\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850163 kubelet[2655]: I0912 23:43:14.850035 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5cec8e2-406c-4e4c-847e-768029ca7270-cilium-config-path\") pod \"d5cec8e2-406c-4e4c-847e-768029ca7270\" (UID: \"d5cec8e2-406c-4e4c-847e-768029ca7270\") " Sep 12 23:43:14.850163 kubelet[2655]: 
I0912 23:43:14.850069 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-xtables-lock\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850163 kubelet[2655]: I0912 23:43:14.850085 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-lib-modules\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850163 kubelet[2655]: I0912 23:43:14.850102 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmh7\" (UniqueName: \"kubernetes.io/projected/d5cec8e2-406c-4e4c-847e-768029ca7270-kube-api-access-4wmh7\") pod \"d5cec8e2-406c-4e4c-847e-768029ca7270\" (UID: \"d5cec8e2-406c-4e4c-847e-768029ca7270\") " Sep 12 23:43:14.850163 kubelet[2655]: I0912 23:43:14.850143 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-net\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850319 kubelet[2655]: I0912 23:43:14.850160 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-etc-cni-netd\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.850319 kubelet[2655]: I0912 23:43:14.850186 2655 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cni-path\") pod \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\" (UID: \"052a3dd4-2cc2-4d9e-a3b6-1f630d554c88\") " Sep 12 23:43:14.851285 kubelet[2655]: I0912 23:43:14.851249 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.851327 kubelet[2655]: I0912 23:43:14.851256 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hostproc" (OuterVolumeSpecName: "hostproc") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.851327 kubelet[2655]: I0912 23:43:14.851310 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.851631 kubelet[2655]: I0912 23:43:14.851604 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.851977 kubelet[2655]: I0912 23:43:14.851817 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cni-path" (OuterVolumeSpecName: "cni-path") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.852005 kubelet[2655]: I0912 23:43:14.851987 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.852074 kubelet[2655]: I0912 23:43:14.852006 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.853583 kubelet[2655]: I0912 23:43:14.853285 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.853583 kubelet[2655]: I0912 23:43:14.853321 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.853583 kubelet[2655]: I0912 23:43:14.853332 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:43:14.853583 kubelet[2655]: I0912 23:43:14.853475 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:43:14.853838 kubelet[2655]: I0912 23:43:14.853810 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cec8e2-406c-4e4c-847e-768029ca7270-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5cec8e2-406c-4e4c-847e-768029ca7270" (UID: "d5cec8e2-406c-4e4c-847e-768029ca7270"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:43:14.855048 kubelet[2655]: I0912 23:43:14.855005 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:43:14.855624 kubelet[2655]: I0912 23:43:14.855586 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:43:14.856062 kubelet[2655]: I0912 23:43:14.856025 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cec8e2-406c-4e4c-847e-768029ca7270-kube-api-access-4wmh7" (OuterVolumeSpecName: "kube-api-access-4wmh7") pod "d5cec8e2-406c-4e4c-847e-768029ca7270" (UID: "d5cec8e2-406c-4e4c-847e-768029ca7270"). InnerVolumeSpecName "kube-api-access-4wmh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:43:14.856705 kubelet[2655]: I0912 23:43:14.856673 2655 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5" (OuterVolumeSpecName: "kube-api-access-tdrh5") pod "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" (UID: "052a3dd4-2cc2-4d9e-a3b6-1f630d554c88"). InnerVolumeSpecName "kube-api-access-tdrh5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:43:14.921332 kubelet[2655]: I0912 23:43:14.920533 2655 scope.go:117] "RemoveContainer" containerID="9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7" Sep 12 23:43:14.925530 containerd[1542]: time="2025-09-12T23:43:14.925500181Z" level=info msg="RemoveContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\"" Sep 12 23:43:14.926189 systemd[1]: Removed slice kubepods-burstable-pod052a3dd4_2cc2_4d9e_a3b6_1f630d554c88.slice - libcontainer container kubepods-burstable-pod052a3dd4_2cc2_4d9e_a3b6_1f630d554c88.slice. Sep 12 23:43:14.926516 systemd[1]: kubepods-burstable-pod052a3dd4_2cc2_4d9e_a3b6_1f630d554c88.slice: Consumed 6.105s CPU time, 123.9M memory peak, 152K read from disk, 12.9M written to disk. Sep 12 23:43:14.935665 systemd[1]: Removed slice kubepods-besteffort-podd5cec8e2_406c_4e4c_847e_768029ca7270.slice - libcontainer container kubepods-besteffort-podd5cec8e2_406c_4e4c_847e_768029ca7270.slice. 
Sep 12 23:43:14.939770 containerd[1542]: time="2025-09-12T23:43:14.939723757Z" level=info msg="RemoveContainer for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" returns successfully" Sep 12 23:43:14.940051 kubelet[2655]: I0912 23:43:14.940021 2655 scope.go:117] "RemoveContainer" containerID="127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d" Sep 12 23:43:14.943215 containerd[1542]: time="2025-09-12T23:43:14.943033398Z" level=info msg="RemoveContainer for \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\"" Sep 12 23:43:14.947976 containerd[1542]: time="2025-09-12T23:43:14.947935379Z" level=info msg="RemoveContainer for \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" returns successfully" Sep 12 23:43:14.948213 kubelet[2655]: I0912 23:43:14.948188 2655 scope.go:117] "RemoveContainer" containerID="13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5" Sep 12 23:43:14.950817 kubelet[2655]: I0912 23:43:14.950796 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tdrh5\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-kube-api-access-tdrh5\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950817 kubelet[2655]: I0912 23:43:14.950819 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5cec8e2-406c-4e4c-847e-768029ca7270-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950830 2655 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950838 2655 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950846 2655 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wmh7\" (UniqueName: \"kubernetes.io/projected/d5cec8e2-406c-4e4c-847e-768029ca7270-kube-api-access-4wmh7\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950855 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950863 2655 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950871 2655 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950879 2655 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.950901 kubelet[2655]: I0912 23:43:14.950886 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950894 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950901 2655 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950908 2655 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950917 2655 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950925 2655 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.951074 kubelet[2655]: I0912 23:43:14.950931 2655 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 23:43:14.952277 containerd[1542]: time="2025-09-12T23:43:14.951701345Z" level=info msg="RemoveContainer for \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\"" Sep 12 23:43:14.955110 containerd[1542]: time="2025-09-12T23:43:14.955076467Z" level=info msg="RemoveContainer for \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" returns successfully" Sep 12 23:43:14.955274 kubelet[2655]: I0912 23:43:14.955233 2655 scope.go:117] "RemoveContainer" containerID="a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db" Sep 12 23:43:14.956617 containerd[1542]: time="2025-09-12T23:43:14.956586886Z" level=info msg="RemoveContainer for \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\"" Sep 12 23:43:14.959284 containerd[1542]: time="2025-09-12T23:43:14.959234119Z" level=info msg="RemoveContainer for \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" returns successfully" Sep 12 23:43:14.959436 kubelet[2655]: I0912 23:43:14.959412 2655 scope.go:117] "RemoveContainer" containerID="eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841" Sep 12 23:43:14.960979 containerd[1542]: time="2025-09-12T23:43:14.960952340Z" level=info msg="RemoveContainer for \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\"" Sep 12 23:43:14.963488 containerd[1542]: time="2025-09-12T23:43:14.963464891Z" level=info msg="RemoveContainer for \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" returns successfully" Sep 12 23:43:14.963765 kubelet[2655]: I0912 23:43:14.963726 2655 scope.go:117] "RemoveContainer" containerID="9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7" Sep 12 23:43:14.963950 containerd[1542]: time="2025-09-12T23:43:14.963916737Z" level=error 
msg="ContainerStatus for \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\": not found" Sep 12 23:43:14.967958 kubelet[2655]: E0912 23:43:14.967912 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\": not found" containerID="9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7" Sep 12 23:43:14.968019 kubelet[2655]: I0912 23:43:14.967957 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7"} err="failed to get container status \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ac67d8afbd67c1d228d87f336f28f8f3ecd43d67b85e821945f67d1c6d8c8f7\": not found" Sep 12 23:43:14.968019 kubelet[2655]: I0912 23:43:14.967996 2655 scope.go:117] "RemoveContainer" containerID="127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d" Sep 12 23:43:14.968347 containerd[1542]: time="2025-09-12T23:43:14.968293951Z" level=error msg="ContainerStatus for \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\": not found" Sep 12 23:43:14.968461 kubelet[2655]: E0912 23:43:14.968439 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\": not found" containerID="127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d" Sep 12 23:43:14.968523 kubelet[2655]: I0912 23:43:14.968501 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d"} err="failed to get container status \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"127eff2b1633781ce7dc9193bdfc88348b6fe786ee4f3375f1d5d2b90b999d5d\": not found" Sep 12 23:43:14.968545 kubelet[2655]: I0912 23:43:14.968524 2655 scope.go:117] "RemoveContainer" containerID="13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5" Sep 12 23:43:14.968785 containerd[1542]: time="2025-09-12T23:43:14.968708956Z" level=error msg="ContainerStatus for \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\": not found" Sep 12 23:43:14.968880 kubelet[2655]: E0912 23:43:14.968852 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\": not found" containerID="13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5" Sep 12 23:43:14.968915 kubelet[2655]: I0912 23:43:14.968885 2655 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5"} err="failed to get container status \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"13501cf07bd8ff265207d483edcbbf3dd8768df716c27811c249fe07146871f5\": not found" Sep 12 23:43:14.968915 kubelet[2655]: I0912 23:43:14.968905 2655 scope.go:117] "RemoveContainer" containerID="a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db" Sep 12 23:43:14.969100 containerd[1542]: time="2025-09-12T23:43:14.969071001Z" level=error msg="ContainerStatus for \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\": not found" Sep 12 23:43:14.969215 kubelet[2655]: E0912 23:43:14.969194 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\": not found" containerID="a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db" Sep 12 23:43:14.969264 kubelet[2655]: I0912 23:43:14.969223 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db"} err="failed to get container status \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\": rpc error: code = NotFound desc = an error occurred when try to find container \"a613b88bdcbd8523e19753344dba1cf2c4429ee9c40a6474c614b904145122db\": not found" Sep 12 23:43:14.969264 kubelet[2655]: I0912 23:43:14.969252 2655 scope.go:117] "RemoveContainer" containerID="eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841" Sep 12 23:43:14.969423 containerd[1542]: time="2025-09-12T23:43:14.969393685Z" level=error msg="ContainerStatus for \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\": not found" Sep 12 23:43:14.969537 kubelet[2655]: E0912 23:43:14.969518 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\": not found" containerID="eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841" Sep 12 23:43:14.969579 kubelet[2655]: I0912 23:43:14.969539 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841"} err="failed to get container status \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaa2b6a32926d359335aacf2bca0ccb58801d566afbdec616f5314a82e26c841\": not found" Sep 12 23:43:14.969579 kubelet[2655]: I0912 23:43:14.969553 2655 scope.go:117] "RemoveContainer" containerID="4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f" Sep 12 23:43:14.970970 containerd[1542]: time="2025-09-12T23:43:14.970932904Z" level=info msg="RemoveContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\"" Sep 
12 23:43:14.973550 containerd[1542]: time="2025-09-12T23:43:14.973518056Z" level=info msg="RemoveContainer for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" returns successfully" Sep 12 23:43:14.973779 kubelet[2655]: I0912 23:43:14.973700 2655 scope.go:117] "RemoveContainer" containerID="4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f" Sep 12 23:43:14.973950 containerd[1542]: time="2025-09-12T23:43:14.973892980Z" level=error msg="ContainerStatus for \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\": not found" Sep 12 23:43:14.974077 kubelet[2655]: E0912 23:43:14.974052 2655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\": not found" containerID="4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f" Sep 12 23:43:14.974109 kubelet[2655]: I0912 23:43:14.974083 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f"} err="failed to get container status \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4dcfa8c513a0a9d33ef4361ac593816e611681eba36437291e8c21deff0d453f\": not found" Sep 12 23:43:15.681848 systemd[1]: var-lib-kubelet-pods-d5cec8e2\x2d406c\x2d4e4c\x2d847e\x2d768029ca7270-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4wmh7.mount: Deactivated successfully. Sep 12 23:43:15.681949 systemd[1]: var-lib-kubelet-pods-052a3dd4\x2d2cc2\x2d4d9e\x2da3b6\x2d1f630d554c88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtdrh5.mount: Deactivated successfully. Sep 12 23:43:15.681996 systemd[1]: var-lib-kubelet-pods-052a3dd4\x2d2cc2\x2d4d9e\x2da3b6\x2d1f630d554c88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 23:43:15.682043 systemd[1]: var-lib-kubelet-pods-052a3dd4\x2d2cc2\x2d4d9e\x2da3b6\x2d1f630d554c88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 23:43:16.616737 sshd[4217]: Connection closed by 10.0.0.1 port 55836 Sep 12 23:43:16.617309 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:16.628708 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:55836.service: Deactivated successfully. Sep 12 23:43:16.630401 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 23:43:16.631314 systemd[1]: session-21.scope: Consumed 1.163s CPU time, 22.9M memory peak. Sep 12 23:43:16.631849 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. Sep 12 23:43:16.633895 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838). Sep 12 23:43:16.634977 systemd-logind[1515]: Removed session 21. Sep 12 23:43:16.691257 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:16.692541 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:16.696915 systemd-logind[1515]: New session 22 of user core. 
Sep 12 23:43:16.705398 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 23:43:16.749453 kubelet[2655]: I0912 23:43:16.749403 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052a3dd4-2cc2-4d9e-a3b6-1f630d554c88" path="/var/lib/kubelet/pods/052a3dd4-2cc2-4d9e-a3b6-1f630d554c88/volumes" Sep 12 23:43:16.750455 kubelet[2655]: I0912 23:43:16.750428 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cec8e2-406c-4e4c-847e-768029ca7270" path="/var/lib/kubelet/pods/d5cec8e2-406c-4e4c-847e-768029ca7270/volumes" Sep 12 23:43:17.637501 sshd[4371]: Connection closed by 10.0.0.1 port 55838 Sep 12 23:43:17.635999 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:17.647798 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:55838.service: Deactivated successfully. Sep 12 23:43:17.650235 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 23:43:17.652135 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. Sep 12 23:43:17.659178 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:55844.service - OpenSSH per-connection server daemon (10.0.0.1:55844). Sep 12 23:43:17.661862 systemd-logind[1515]: Removed session 22. Sep 12 23:43:17.675986 systemd[1]: Created slice kubepods-burstable-podd09dc6c9_2dcd_48b7_a3ed_acf55778351f.slice - libcontainer container kubepods-burstable-podd09dc6c9_2dcd_48b7_a3ed_acf55778351f.slice. Sep 12 23:43:17.715982 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 55844 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:17.717167 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:17.721573 systemd-logind[1515]: New session 23 of user core. Sep 12 23:43:17.737430 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 23:43:17.765547 kubelet[2655]: I0912 23:43:17.765454 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-host-proc-sys-kernel\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.765547 kubelet[2655]: I0912 23:43:17.765501 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-hubble-tls\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.765547 kubelet[2655]: I0912 23:43:17.765523 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-clustermesh-secrets\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.765930 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8dd\" (UniqueName: \"kubernetes.io/projected/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-kube-api-access-cc8dd\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.765998 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-cilium-cgroup\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.766019 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-cilium-ipsec-secrets\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.766037 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-cni-path\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.766051 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-host-proc-sys-net\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766268 kubelet[2655]: I0912 23:43:17.766080 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-bpf-maps\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766097 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-hostproc\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766113 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-cilium-run\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766128 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-lib-modules\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766165 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-etc-cni-netd\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766192 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-xtables-lock\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.766419 kubelet[2655]: I0912 23:43:17.766206 2655 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09dc6c9-2dcd-48b7-a3ed-acf55778351f-cilium-config-path\") pod \"cilium-wxc9r\" (UID: \"d09dc6c9-2dcd-48b7-a3ed-acf55778351f\") " pod="kube-system/cilium-wxc9r" Sep 12 23:43:17.785276 sshd[4385]: Connection closed by 10.0.0.1 port 55844 Sep 12 23:43:17.785815 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:17.795086 kubelet[2655]: E0912 23:43:17.795050 2655 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 23:43:17.798208 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:55844.service: Deactivated successfully. Sep 12 23:43:17.799602 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 23:43:17.801782 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. Sep 12 23:43:17.803833 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:55856.service - OpenSSH per-connection server daemon (10.0.0.1:55856). Sep 12 23:43:17.805137 systemd-logind[1515]: Removed session 23. Sep 12 23:43:17.857629 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 55856 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:43:17.858994 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:43:17.863385 systemd-logind[1515]: New session 24 of user core. Sep 12 23:43:17.871398 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 12 23:43:17.985649 containerd[1542]: time="2025-09-12T23:43:17.985550860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxc9r,Uid:d09dc6c9-2dcd-48b7-a3ed-acf55778351f,Namespace:kube-system,Attempt:0,}" Sep 12 23:43:18.002679 containerd[1542]: time="2025-09-12T23:43:18.002621986Z" level=info msg="connecting to shim bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:43:18.030618 systemd[1]: Started cri-containerd-bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232.scope - libcontainer container bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232. Sep 12 23:43:18.053979 containerd[1542]: time="2025-09-12T23:43:18.053937398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxc9r,Uid:d09dc6c9-2dcd-48b7-a3ed-acf55778351f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\"" Sep 12 23:43:18.058263 containerd[1542]: time="2025-09-12T23:43:18.058059408Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:43:18.067893 containerd[1542]: time="2025-09-12T23:43:18.067857604Z" level=info msg="Container 60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:43:18.073244 containerd[1542]: time="2025-09-12T23:43:18.073205988Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\"" Sep 12 23:43:18.073758 containerd[1542]: time="2025-09-12T23:43:18.073632753Z" level=info msg="StartContainer for \"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\"" Sep 12 23:43:18.074798 containerd[1542]: time="2025-09-12T23:43:18.074762847Z" level=info msg="connecting to shim 60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" protocol=ttrpc version=3 Sep 12 23:43:18.113408 systemd[1]: Started cri-containerd-60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7.scope - libcontainer container 60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7. Sep 12 23:43:18.138188 containerd[1542]: time="2025-09-12T23:43:18.138153564Z" level=info msg="StartContainer for \"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\" returns successfully" Sep 12 23:43:18.147264 systemd[1]: cri-containerd-60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7.scope: Deactivated successfully. 
Sep 12 23:43:18.149875 containerd[1542]: time="2025-09-12T23:43:18.149741822Z" level=info msg="received exit event container_id:\"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\" id:\"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\" pid:4464 exited_at:{seconds:1757720598 nanos:148177443}" Sep 12 23:43:18.149978 containerd[1542]: time="2025-09-12T23:43:18.149845943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\" id:\"60e358597054c8fc747f807953354a3140306c79c268982b22e9fd8b6a1b9cd7\" pid:4464 exited_at:{seconds:1757720598 nanos:148177443}" Sep 12 23:43:18.945906 containerd[1542]: time="2025-09-12T23:43:18.945323199Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:43:18.952623 containerd[1542]: time="2025-09-12T23:43:18.952576966Z" level=info msg="Container 511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:43:18.961968 containerd[1542]: time="2025-09-12T23:43:18.961366311Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\"" Sep 12 23:43:18.962386 containerd[1542]: time="2025-09-12T23:43:18.962360403Z" level=info msg="StartContainer for \"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\"" Sep 12 23:43:18.963755 containerd[1542]: time="2025-09-12T23:43:18.963724859Z" level=info msg="connecting to shim 511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" protocol=ttrpc version=3 Sep 12 23:43:18.985415 systemd[1]: Started cri-containerd-511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6.scope - libcontainer container 511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6. Sep 12 23:43:19.009800 containerd[1542]: time="2025-09-12T23:43:19.009761768Z" level=info msg="StartContainer for \"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\" returns successfully" Sep 12 23:43:19.017707 systemd[1]: cri-containerd-511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6.scope: Deactivated successfully. Sep 12 23:43:19.018658 containerd[1542]: time="2025-09-12T23:43:19.018608433Z" level=info msg="received exit event container_id:\"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\" id:\"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\" pid:4510 exited_at:{seconds:1757720599 nanos:18288909}" Sep 12 23:43:19.018951 containerd[1542]: time="2025-09-12T23:43:19.018905836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\" id:\"511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6\" pid:4510 exited_at:{seconds:1757720599 nanos:18288909}" Sep 12 23:43:19.873680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-511a8aec80c85f3007c3a54e698265760d7ceeb83804b8e55ed92145bdf1f0f6-rootfs.mount: Deactivated successfully. 
Sep 12 23:43:19.949436 containerd[1542]: time="2025-09-12T23:43:19.949398255Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:43:19.959158 containerd[1542]: time="2025-09-12T23:43:19.958599124Z" level=info msg="Container 55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:43:19.964946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134631304.mount: Deactivated successfully. Sep 12 23:43:19.967797 containerd[1542]: time="2025-09-12T23:43:19.967672912Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\"" Sep 12 23:43:19.968135 containerd[1542]: time="2025-09-12T23:43:19.968099317Z" level=info msg="StartContainer for \"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\"" Sep 12 23:43:19.970250 containerd[1542]: time="2025-09-12T23:43:19.970210102Z" level=info msg="connecting to shim 55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" protocol=ttrpc version=3 Sep 12 23:43:20.003397 systemd[1]: Started cri-containerd-55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03.scope - libcontainer container 55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03. Sep 12 23:43:20.047022 containerd[1542]: time="2025-09-12T23:43:20.046950647Z" level=info msg="StartContainer for \"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\" returns successfully" Sep 12 23:43:20.049056 systemd[1]: cri-containerd-55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03.scope: Deactivated successfully. Sep 12 23:43:20.050632 containerd[1542]: time="2025-09-12T23:43:20.050595129Z" level=info msg="received exit event container_id:\"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\" id:\"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\" pid:4556 exited_at:{seconds:1757720600 nanos:50208605}" Sep 12 23:43:20.050899 containerd[1542]: time="2025-09-12T23:43:20.050841652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\" id:\"55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03\" pid:4556 exited_at:{seconds:1757720600 nanos:50208605}" Sep 12 23:43:20.072274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55a4ad37cae49ffc09773f02f2826582525f4996bce24f2a47a97f1b59e87e03-rootfs.mount: Deactivated successfully. 
Sep 12 23:43:20.966135 containerd[1542]: time="2025-09-12T23:43:20.965740204Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:43:20.974734 containerd[1542]: time="2025-09-12T23:43:20.973984529Z" level=info msg="Container 59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:43:20.996265 containerd[1542]: time="2025-09-12T23:43:20.996208641Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\"" Sep 12 23:43:20.996884 containerd[1542]: time="2025-09-12T23:43:20.996847487Z" level=info msg="StartContainer for \"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\"" Sep 12 23:43:20.998007 containerd[1542]: time="2025-09-12T23:43:20.997921019Z" level=info msg="connecting to shim 59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" protocol=ttrpc version=3 Sep 12 23:43:21.021449 systemd[1]: Started cri-containerd-59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391.scope - libcontainer container 59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391. Sep 12 23:43:21.048753 systemd[1]: cri-containerd-59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391.scope: Deactivated successfully. Sep 12 23:43:21.051075 containerd[1542]: time="2025-09-12T23:43:21.050352210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\" id:\"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\" pid:4595 exited_at:{seconds:1757720601 nanos:49120612}" Sep 12 23:43:21.052129 containerd[1542]: time="2025-09-12T23:43:21.050608610Z" level=info msg="received exit event container_id:\"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\" id:\"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\" pid:4595 exited_at:{seconds:1757720601 nanos:49120612}" Sep 12 23:43:21.057734 containerd[1542]: time="2025-09-12T23:43:21.057698558Z" level=info msg="StartContainer for \"59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391\" returns successfully" Sep 12 23:43:21.068492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e2d9e3686f6eed4e0e209b8076e61cbf0b6318fc2c8307066ac5f8acd03391-rootfs.mount: Deactivated successfully. 
Sep 12 23:43:21.974033 containerd[1542]: time="2025-09-12T23:43:21.973971090Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:43:21.993604 containerd[1542]: time="2025-09-12T23:43:21.993573297Z" level=info msg="Container 34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:43:22.000031 containerd[1542]: time="2025-09-12T23:43:21.999998886Z" level=info msg="CreateContainer within sandbox \"bf5a85db5da0cbceb827333d88dc159b389ca32c31f8b6a09b070825c1b33232\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\"" Sep 12 23:43:22.000734 containerd[1542]: time="2025-09-12T23:43:22.000696245Z" level=info msg="StartContainer for \"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\"" Sep 12 23:43:22.001734 containerd[1542]: time="2025-09-12T23:43:22.001681203Z" level=info msg="connecting to shim 34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a" address="unix:///run/containerd/s/4a5d48819934d3eb0522b32adef44815b5e47f1e0878552acb46da3ce1e3ad3a" protocol=ttrpc version=3 Sep 12 23:43:22.024432 systemd[1]: Started cri-containerd-34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a.scope - libcontainer container 34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a. Sep 12 23:43:22.063993 containerd[1542]: time="2025-09-12T23:43:22.063526399Z" level=info msg="StartContainer for \"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" returns successfully" Sep 12 23:43:22.116576 containerd[1542]: time="2025-09-12T23:43:22.116523608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" id:\"c2680ed49ac05b6eaf9c63172695f6b073e76af3de566af267d028a340da1fe5\" pid:4663 exited_at:{seconds:1757720602 nanos:116283208}" Sep 12 23:43:22.322294 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 23:43:22.996313 kubelet[2655]: I0912 23:43:22.996248 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wxc9r" podStartSLOduration=5.996225057 podStartE2EDuration="5.996225057s" podCreationTimestamp="2025-09-12 23:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:43:22.995836778 +0000 UTC m=+70.333642390" watchObservedRunningTime="2025-09-12 23:43:22.996225057 +0000 UTC m=+70.334030629" Sep 12 23:43:24.270199 containerd[1542]: time="2025-09-12T23:43:24.270156632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" id:\"5599842e6e68d3d15020055b41c963cb655aae7de2dd216cc7ecc061eea94949\" pid:4891 exit_status:1 exited_at:{seconds:1757720604 nanos:269617952}" Sep 12 23:43:25.113074 systemd-networkd[1422]: lxc_health: Link UP Sep 12 23:43:25.115683 systemd-networkd[1422]: lxc_health: Gained carrier Sep 12 23:43:26.384006 containerd[1542]: time="2025-09-12T23:43:26.383904623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" id:\"de05e74887d80966d7548112e50e854726c93ed1e40894bedc962f2e29d53bbd\" pid:5202 exited_at:{seconds:1757720606 
nanos:383493303}" Sep 12 23:43:26.386206 kubelet[2655]: E0912 23:43:26.386162 2655 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:38984->127.0.0.1:42333: read tcp 127.0.0.1:38984->127.0.0.1:42333: read: connection reset by peer Sep 12 23:43:26.386832 kubelet[2655]: E0912 23:43:26.386548 2655 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38984->127.0.0.1:42333: write tcp 127.0.0.1:38984->127.0.0.1:42333: write: broken pipe Sep 12 23:43:26.398476 systemd-networkd[1422]: lxc_health: Gained IPv6LL Sep 12 23:43:28.494572 containerd[1542]: time="2025-09-12T23:43:28.494475915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" id:\"3e50ad99fef07ffbc031d12c3307e0c98c270f8a113740eefa500b70c463d933\" pid:5229 exited_at:{seconds:1757720608 nanos:494044755}" Sep 12 23:43:30.612401 containerd[1542]: time="2025-09-12T23:43:30.612361991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34ba8773723b6040c92e6fb9912b14fb1069e9a5e07021cf889922fed020870a\" id:\"5bdd803bd1b16cd4f8ee05e443ccdf50062d0e6fb45015ae4c3eaee29b4eb213\" pid:5263 exited_at:{seconds:1757720610 nanos:612025311}" Sep 12 23:43:30.617714 sshd[4398]: Connection closed by 10.0.0.1 port 55856 Sep 12 23:43:30.619209 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Sep 12 23:43:30.622358 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:55856.service: Deactivated successfully. Sep 12 23:43:30.624038 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 23:43:30.624686 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. Sep 12 23:43:30.625925 systemd-logind[1515]: Removed session 24.