Dec 12 17:33:32.783089 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:33:32.783110 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:33:32.783120 kernel: KASLR enabled
Dec 12 17:33:32.783125 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:33:32.783131 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 12 17:33:32.783137 kernel: random: crng init done
Dec 12 17:33:32.783144 kernel: secureboot: Secure boot disabled
Dec 12 17:33:32.783149 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:33:32.783155 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 12 17:33:32.783163 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:33:32.783169 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783175 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783181 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783187 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783194 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783202 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783209 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783216 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783222 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:33:32.783237 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:33:32.783244 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:33:32.783250 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:33:32.783257 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 12 17:33:32.783263 kernel: Zone ranges:
Dec 12 17:33:32.783270 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:33:32.783278 kernel: DMA32 empty
Dec 12 17:33:32.783284 kernel: Normal empty
Dec 12 17:33:32.783291 kernel: Device empty
Dec 12 17:33:32.783297 kernel: Movable zone start for each node
Dec 12 17:33:32.783303 kernel: Early memory node ranges
Dec 12 17:33:32.783310 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 12 17:33:32.783316 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 12 17:33:32.783322 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 12 17:33:32.783328 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 12 17:33:32.783335 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 12 17:33:32.783341 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 12 17:33:32.783347 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 12 17:33:32.783355 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 12 17:33:32.783361 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 12 17:33:32.783368 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 12 17:33:32.783377 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 12 17:33:32.783383 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 12 17:33:32.783390 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:33:32.783398 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:33:32.783405 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:33:32.783412 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 12 17:33:32.783419 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:33:32.783425 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:33:32.783432 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:33:32.783456 kernel: psci: Trusted OS migration not required
Dec 12 17:33:32.783465 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:33:32.783472 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:33:32.783481 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:33:32.783492 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:33:32.783499 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:33:32.783506 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:33:32.783512 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:33:32.783519 kernel: CPU features: detected: Spectre-v4
Dec 12 17:33:32.783526 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:33:32.783533 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:33:32.783539 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:33:32.783546 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:33:32.783553 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:33:32.783560 kernel: alternatives: applying boot alternatives
Dec 12 17:33:32.783567 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:33:32.783576 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:33:32.783583 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:33:32.783590 kernel: Fallback order for Node 0: 0
Dec 12 17:33:32.783597 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:33:32.783603 kernel: Policy zone: DMA
Dec 12 17:33:32.783610 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:33:32.783617 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:33:32.783624 kernel: software IO TLB: area num 4.
Dec 12 17:33:32.783630 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:33:32.783637 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 12 17:33:32.783644 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:33:32.783652 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:33:32.783659 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:33:32.783666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:33:32.783673 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:33:32.783680 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:33:32.783687 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:33:32.783694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:33:32.783701 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:33:32.783707 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:33:32.783714 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:33:32.783721 kernel: GICv3: 256 SPIs implemented
Dec 12 17:33:32.783729 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:33:32.783736 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:33:32.783742 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:33:32.783749 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:33:32.783756 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:33:32.783762 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:33:32.783769 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:33:32.783776 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:33:32.783783 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:33:32.783789 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:33:32.783796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:33:32.783803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:33:32.783812 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:33:32.783819 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:33:32.783826 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:33:32.783833 kernel: arm-pv: using stolen time PV
Dec 12 17:33:32.783840 kernel: Console: colour dummy device 80x25
Dec 12 17:33:32.783847 kernel: ACPI: Core revision 20240827
Dec 12 17:33:32.783854 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:33:32.783862 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:33:32.783869 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:33:32.783876 kernel: landlock: Up and running.
Dec 12 17:33:32.783884 kernel: SELinux: Initializing.
Dec 12 17:33:32.783891 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:33:32.783898 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:33:32.783905 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:33:32.783912 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:33:32.783919 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:33:32.783926 kernel: Remapping and enabling EFI services.
Dec 12 17:33:32.783933 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:33:32.783940 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:33:32.783952 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:33:32.783960 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:33:32.783967 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:33:32.783976 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:33:32.783983 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:33:32.783990 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:33:32.783998 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:33:32.784005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:33:32.784014 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:33:32.784021 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:33:32.784029 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:33:32.784036 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:33:32.784043 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:33:32.784050 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:33:32.784058 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:33:32.784065 kernel: SMP: Total of 4 processors activated.
Dec 12 17:33:32.784072 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:33:32.784081 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:33:32.784088 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:33:32.784096 kernel: CPU features: detected: Common not Private translations
Dec 12 17:33:32.784103 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:33:32.784110 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:33:32.784118 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:33:32.784125 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:33:32.784132 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:33:32.784140 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:33:32.784147 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:33:32.784156 kernel: alternatives: applying system-wide alternatives
Dec 12 17:33:32.784163 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:33:32.784170 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 12 17:33:32.784178 kernel: devtmpfs: initialized
Dec 12 17:33:32.784185 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:33:32.784193 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:33:32.784200 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:33:32.784207 kernel: 0 pages in range for non-PLT usage
Dec 12 17:33:32.784216 kernel: 508400 pages in range for PLT usage
Dec 12 17:33:32.784223 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:33:32.784236 kernel: SMBIOS 3.0.0 present.
Dec 12 17:33:32.784244 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:33:32.784251 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:33:32.784259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:33:32.784267 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:33:32.784275 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:33:32.784282 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:33:32.784291 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:33:32.784298 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Dec 12 17:33:32.784305 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:33:32.784313 kernel: cpuidle: using governor menu
Dec 12 17:33:32.784320 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:33:32.784327 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:33:32.784335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:33:32.784343 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:33:32.784350 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:33:32.784359 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:33:32.784367 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:33:32.784374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:33:32.784382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:33:32.784389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:33:32.784397 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:33:32.784404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:33:32.784412 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:33:32.784419 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:33:32.784428 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:33:32.784436 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:33:32.784456 kernel: ACPI: Interpreter enabled
Dec 12 17:33:32.784464 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:33:32.784472 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:33:32.784480 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:33:32.784487 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:33:32.784495 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:33:32.784502 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:33:32.784510 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:33:32.784520 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:33:32.784528 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:33:32.784663 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:33:32.784749 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:33:32.784812 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:33:32.784874 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:33:32.784935 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:33:32.784947 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:33:32.784955 kernel: PCI host bridge to bus 0000:00
Dec 12 17:33:32.785022 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:33:32.785078 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:33:32.785133 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:33:32.785187 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:33:32.785280 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:33:32.785358 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:33:32.785422 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:33:32.785500 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:33:32.785564 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:33:32.785631 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:33:32.785695 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:33:32.785762 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:33:32.785825 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:33:32.785882 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:33:32.785938 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:33:32.785948 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:33:32.785955 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:33:32.785963 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:33:32.785970 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:33:32.785979 kernel: iommu: Default domain type: Translated
Dec 12 17:33:32.785987 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:33:32.785994 kernel: efivars: Registered efivars operations
Dec 12 17:33:32.786001 kernel: vgaarb: loaded
Dec 12 17:33:32.786009 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:33:32.786016 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:33:32.786023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:33:32.786030 kernel: pnp: PnP ACPI init
Dec 12 17:33:32.786104 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:33:32.786116 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:33:32.786124 kernel: NET: Registered PF_INET protocol family
Dec 12 17:33:32.786131 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:33:32.786139 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:33:32.786146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:33:32.786153 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:33:32.786161 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:33:32.786168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:33:32.786177 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:33:32.786184 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:33:32.786191 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:33:32.786199 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:33:32.786206 kernel: kvm [1]: HYP mode not available
Dec 12 17:33:32.786213 kernel: Initialise system trusted keyrings
Dec 12 17:33:32.786221 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:33:32.786235 kernel: Key type asymmetric registered
Dec 12 17:33:32.786244 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:33:32.786253 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:33:32.786261 kernel: io scheduler mq-deadline registered
Dec 12 17:33:32.786268 kernel: io scheduler kyber registered
Dec 12 17:33:32.786275 kernel: io scheduler bfq registered
Dec 12 17:33:32.786283 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:33:32.786290 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:33:32.786307 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:33:32.786380 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:33:32.786390 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:33:32.786400 kernel: thunder_xcv, ver 1.0
Dec 12 17:33:32.786407 kernel: thunder_bgx, ver 1.0
Dec 12 17:33:32.786415 kernel: nicpf, ver 1.0
Dec 12 17:33:32.786422 kernel: nicvf, ver 1.0
Dec 12 17:33:32.786507 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:33:32.786568 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:33:32 UTC (1765560812)
Dec 12 17:33:32.786578 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:33:32.786586 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:33:32.786597 kernel: watchdog: NMI not fully supported
Dec 12 17:33:32.786604 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:33:32.786612 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:33:32.786620 kernel: Segment Routing with IPv6
Dec 12 17:33:32.786627 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:33:32.786634 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:33:32.786641 kernel: Key type dns_resolver registered
Dec 12 17:33:32.786649 kernel: registered taskstats version 1
Dec 12 17:33:32.786656 kernel: Loading compiled-in X.509 certificates
Dec 12 17:33:32.786664 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:33:32.786673 kernel: Demotion targets for Node 0: null
Dec 12 17:33:32.786680 kernel: Key type .fscrypt registered
Dec 12 17:33:32.786687 kernel: Key type fscrypt-provisioning registered
Dec 12 17:33:32.786695 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:33:32.786702 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:33:32.786709 kernel: ima: No architecture policies found
Dec 12 17:33:32.786717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:33:32.786724 kernel: clk: Disabling unused clocks
Dec 12 17:33:32.786732 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:33:32.786741 kernel: Warning: unable to open an initial console.
Dec 12 17:33:32.786748 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:33:32.786756 kernel: Run /init as init process
Dec 12 17:33:32.786763 kernel: with arguments:
Dec 12 17:33:32.786771 kernel: /init
Dec 12 17:33:32.786778 kernel: with environment:
Dec 12 17:33:32.786785 kernel: HOME=/
Dec 12 17:33:32.786793 kernel: TERM=linux
Dec 12 17:33:32.786801 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:33:32.786813 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:33:32.786821 systemd[1]: Detected virtualization kvm.
Dec 12 17:33:32.786829 systemd[1]: Detected architecture arm64.
Dec 12 17:33:32.786836 systemd[1]: Running in initrd.
Dec 12 17:33:32.786844 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:33:32.786852 systemd[1]: Hostname set to .
Dec 12 17:33:32.786859 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:33:32.786869 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:33:32.786877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:33:32.786885 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:33:32.786893 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:33:32.786901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:33:32.786909 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:33:32.786917 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:33:32.786928 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:33:32.786936 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:33:32.786944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:33:32.786952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:33:32.786959 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:33:32.786967 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:33:32.786975 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:33:32.786983 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:33:32.786992 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:33:32.787000 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:33:32.787008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:33:32.787016 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:33:32.787024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:33:32.787032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:33:32.787040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:33:32.787048 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:33:32.787056 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:33:32.787065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:33:32.787073 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:33:32.787081 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:33:32.787089 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:33:32.787097 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:33:32.787105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:33:32.787113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:33:32.787121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:33:32.787131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:33:32.787139 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:33:32.787147 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:33:32.787155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:33:32.787179 systemd-journald[245]: Collecting audit messages is disabled.
Dec 12 17:33:32.787198 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:33:32.787206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:33:32.787215 systemd-journald[245]: Journal started
Dec 12 17:33:32.787242 systemd-journald[245]: Runtime Journal (/run/log/journal/1510ed3ac5a74213b5d92d07c565712c) is 6M, max 48.5M, 42.4M free.
Dec 12 17:33:32.795525 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:33:32.795556 kernel: Bridge firewalling registered
Dec 12 17:33:32.777347 systemd-modules-load[246]: Inserted module 'overlay'
Dec 12 17:33:32.793326 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 12 17:33:32.800050 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:33:32.800071 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:33:32.801758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:33:32.805381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:33:32.807107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:33:32.808551 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:33:32.821851 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:33:32.824084 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:33:32.826457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:33:32.827618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:33:32.830738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:33:32.832987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:33:32.852093 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:33:32.866326 systemd-resolved[291]: Positive Trust Anchors:
Dec 12 17:33:32.866339 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:33:32.866371 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:33:32.871401 systemd-resolved[291]: Defaulting to hostname 'linux'.
Dec 12 17:33:32.872415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:33:32.876242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:33:32.932469 kernel: SCSI subsystem initialized
Dec 12 17:33:32.936469 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:33:32.944471 kernel: iscsi: registered transport (tcp)
Dec 12 17:33:32.957471 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:33:32.957521 kernel: QLogic iSCSI HBA Driver
Dec 12 17:33:32.973725 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:33:32.988169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:33:32.989732 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:33:33.039387 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:33:33.041727 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:33:33.126487 kernel: raid6: neonx8 gen() 11043 MB/s Dec 12 17:33:33.143473 kernel: raid6: neonx4 gen() 15621 MB/s Dec 12 17:33:33.160467 kernel: raid6: neonx2 gen() 13117 MB/s Dec 12 17:33:33.177477 kernel: raid6: neonx1 gen() 10272 MB/s Dec 12 17:33:33.194467 kernel: raid6: int64x8 gen() 6815 MB/s Dec 12 17:33:33.211478 kernel: raid6: int64x4 gen() 6933 MB/s Dec 12 17:33:33.228467 kernel: raid6: int64x2 gen() 6087 MB/s Dec 12 17:33:33.245494 kernel: raid6: int64x1 gen() 5028 MB/s Dec 12 17:33:33.245516 kernel: raid6: using algorithm neonx4 gen() 15621 MB/s Dec 12 17:33:33.263474 kernel: raid6: .... xor() 12349 MB/s, rmw enabled Dec 12 17:33:33.263507 kernel: raid6: using neon recovery algorithm Dec 12 17:33:33.270640 kernel: xor: measuring software checksum speed Dec 12 17:33:33.270689 kernel: 8regs : 21641 MB/sec Dec 12 17:33:33.271850 kernel: 32regs : 21670 MB/sec Dec 12 17:33:33.271865 kernel: arm64_neon : 28118 MB/sec Dec 12 17:33:33.271875 kernel: xor: using function: arm64_neon (28118 MB/sec) Dec 12 17:33:33.326482 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:33:33.335019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:33:33.339690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:33:33.367943 systemd-udevd[499]: Using default interface naming scheme 'v255'. Dec 12 17:33:33.372365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:33:33.375591 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:33:33.420398 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Dec 12 17:33:33.448202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:33:33.450691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:33:33.502481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 17:33:33.505653 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:33:33.558017 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:33:33.558286 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 12 17:33:33.564577 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:33:33.564619 kernel: GPT:9289727 != 19775487 Dec 12 17:33:33.564630 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:33:33.565721 kernel: GPT:9289727 != 19775487 Dec 12 17:33:33.565757 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:33:33.567377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:33:33.594956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:33:33.617782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:33:33.632625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:33:33.639400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 17:33:33.640601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:33:33.645864 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:33:33.646907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:33:33.646978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:33:33.650304 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:33:33.658026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:33:33.659343 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 12 17:33:33.662878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:33:33.664367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:33:33.668970 disk-uuid[585]: Primary Header is updated. Dec 12 17:33:33.668970 disk-uuid[585]: Secondary Entries is updated. Dec 12 17:33:33.668970 disk-uuid[585]: Secondary Header is updated. Dec 12 17:33:33.675315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:33:33.665808 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:33:33.669771 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:33:33.702988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:33:33.728177 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:33:34.682478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:33:34.682988 disk-uuid[589]: The operation has completed successfully. Dec 12 17:33:34.710367 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:33:34.710478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:33:34.737079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:33:34.751329 sh[613]: Success Dec 12 17:33:34.763596 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:33:34.763644 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:33:34.764792 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:33:34.771459 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:33:34.795293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:33:34.798313 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 12 17:33:34.812787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 17:33:34.819122 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (627) Dec 12 17:33:34.819150 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 12 17:33:34.819161 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:33:34.824129 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:33:34.824157 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:33:34.825114 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 17:33:34.826366 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:33:34.827763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:33:34.828514 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:33:34.829995 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:33:34.855495 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657) Dec 12 17:33:34.858210 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:33:34.858250 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:33:34.860819 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:33:34.860862 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:33:34.865460 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:33:34.866280 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 12 17:33:34.868516 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:33:34.935752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:33:34.941140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:33:34.979064 systemd-networkd[800]: lo: Link UP Dec 12 17:33:34.979074 systemd-networkd[800]: lo: Gained carrier Dec 12 17:33:34.980627 systemd-networkd[800]: Enumeration completed Dec 12 17:33:34.981812 ignition[705]: Ignition 2.22.0 Dec 12 17:33:34.980729 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:33:34.981818 ignition[705]: Stage: fetch-offline Dec 12 17:33:34.981764 systemd[1]: Reached target network.target - Network. Dec 12 17:33:34.981844 ignition[705]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:34.983601 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:33:34.981851 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:34.983604 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:33:34.981922 ignition[705]: parsed url from cmdline: "" Dec 12 17:33:34.984026 systemd-networkd[800]: eth0: Link UP Dec 12 17:33:34.981925 ignition[705]: no config URL provided Dec 12 17:33:34.984317 systemd-networkd[800]: eth0: Gained carrier Dec 12 17:33:34.981929 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:33:34.984327 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 12 17:33:34.981935 ignition[705]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:33:34.981953 ignition[705]: op(1): [started] loading QEMU firmware config module Dec 12 17:33:34.981957 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:33:35.000488 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:33:34.987539 ignition[705]: op(1): [finished] loading QEMU firmware config module Dec 12 17:33:35.039188 ignition[705]: parsing config with SHA512: 9fc7a0f95ce37258d1523202fd82e22e3c5bcd473a3161d34cbade5c87bff7e4391a7f3e9fba305e95e0c66af06324f130446dfbb90b5b8553f55f4c13794801 Dec 12 17:33:35.045336 unknown[705]: fetched base config from "system" Dec 12 17:33:35.045349 unknown[705]: fetched user config from "qemu" Dec 12 17:33:35.045770 ignition[705]: fetch-offline: fetch-offline passed Dec 12 17:33:35.045825 ignition[705]: Ignition finished successfully Dec 12 17:33:35.051812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:33:35.053091 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:33:35.053952 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:33:35.116051 ignition[813]: Ignition 2.22.0 Dec 12 17:33:35.116069 ignition[813]: Stage: kargs Dec 12 17:33:35.116199 ignition[813]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:35.116208 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:35.117030 ignition[813]: kargs: kargs passed Dec 12 17:33:35.117073 ignition[813]: Ignition finished successfully Dec 12 17:33:35.120557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:33:35.123328 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:33:35.157905 ignition[821]: Ignition 2.22.0 Dec 12 17:33:35.158995 ignition[821]: Stage: disks Dec 12 17:33:35.159689 ignition[821]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:35.159699 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:35.160506 ignition[821]: disks: disks passed Dec 12 17:33:35.162496 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:33:35.160553 ignition[821]: Ignition finished successfully Dec 12 17:33:35.165954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:33:35.167029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:33:35.168917 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:33:35.170679 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:33:35.172331 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:33:35.174800 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:33:35.195970 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 17:33:35.215564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:33:35.221071 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:33:35.280460 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 12 17:33:35.280932 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:33:35.282152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:33:35.284688 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:33:35.286280 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:33:35.287309 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 12 17:33:35.287348 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:33:35.287387 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:33:35.302002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:33:35.305355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:33:35.313456 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Dec 12 17:33:35.317018 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:33:35.317054 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:33:35.319777 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:33:35.319828 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:33:35.321529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:33:35.341930 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:33:35.345102 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:33:35.348197 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:33:35.352192 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:33:35.418357 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:33:35.420343 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:33:35.421879 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:33:35.441483 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:33:35.460566 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 17:33:35.475426 ignition[953]: INFO : Ignition 2.22.0 Dec 12 17:33:35.475426 ignition[953]: INFO : Stage: mount Dec 12 17:33:35.477003 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:35.477003 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:35.477003 ignition[953]: INFO : mount: mount passed Dec 12 17:33:35.477003 ignition[953]: INFO : Ignition finished successfully Dec 12 17:33:35.480721 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:33:35.483402 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:33:35.817731 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:33:35.819374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:33:35.849475 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Dec 12 17:33:35.851672 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:33:35.851697 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:33:35.854464 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:33:35.854482 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:33:35.855737 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:33:35.882957 ignition[983]: INFO : Ignition 2.22.0 Dec 12 17:33:35.882957 ignition[983]: INFO : Stage: files Dec 12 17:33:35.884582 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:35.884582 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:35.884582 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:33:35.887903 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:33:35.887903 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:33:35.887903 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:33:35.887903 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:33:35.887903 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:33:35.887110 unknown[983]: wrote ssh authorized keys file for user: core Dec 12 17:33:35.895206 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:33:35.895206 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:33:35.941261 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:33:36.200557 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:33:36.200557 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:33:36.200557 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 12 17:33:36.379236 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 17:33:36.462822 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:33:36.462822 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:33:36.466581 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:33:36.485061 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:33:36.486991 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:33:36.486991 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:33:36.520614 systemd-networkd[800]: eth0: Gained IPv6LL Dec 12 17:33:36.547544 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:33:36.547544 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:33:36.552150 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:33:36.822947 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 17:33:37.169748 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:33:37.169748 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 12 17:33:37.175509 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:33:37.189951 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:33:37.193507 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:33:37.194908 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:33:37.194908 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:33:37.194908 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:33:37.194908 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:33:37.194908 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:33:37.194908 ignition[983]: INFO : files: files passed Dec 12 17:33:37.194908 ignition[983]: INFO : Ignition finished successfully Dec 12 17:33:37.197476 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:33:37.201493 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:33:37.205124 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:33:37.223175 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:33:37.223300 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 12 17:33:37.229221 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:33:37.230538 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:33:37.230538 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:33:37.233573 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:33:37.233525 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:33:37.234989 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:33:37.237703 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:33:37.269680 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:33:37.269814 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:33:37.272780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:33:37.273765 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:33:37.275716 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:33:37.276535 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:33:37.299341 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:33:37.301819 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:33:37.330009 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:33:37.331264 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:33:37.333280 systemd[1]: Stopped target timers.target - Timer Units. 
Dec 12 17:33:37.335045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:33:37.335172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:33:37.337551 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:33:37.339458 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:33:37.341091 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:33:37.342771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:33:37.344543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:33:37.346497 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:33:37.348395 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:33:37.350186 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:33:37.352069 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:33:37.353959 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:33:37.355700 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:33:37.357172 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:33:37.357316 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:33:37.359475 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:33:37.361332 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:33:37.363183 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:33:37.364093 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:33:37.365316 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:33:37.365455 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 12 17:33:37.368129 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:33:37.368261 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:33:37.370033 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:33:37.371462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:33:37.372283 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:33:37.373561 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:33:37.375295 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:33:37.376763 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:33:37.376844 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:33:37.378371 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:33:37.378456 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:33:37.380454 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:33:37.380576 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:33:37.382228 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:33:37.382331 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:33:37.384645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:33:37.386749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:33:37.388304 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:33:37.388418 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:33:37.390386 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:33:37.390494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 12 17:33:37.396392 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:33:37.399593 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:33:37.412890 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:33:37.417235 ignition[1039]: INFO : Ignition 2.22.0 Dec 12 17:33:37.417235 ignition[1039]: INFO : Stage: umount Dec 12 17:33:37.419166 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:33:37.419166 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:33:37.419166 ignition[1039]: INFO : umount: umount passed Dec 12 17:33:37.419166 ignition[1039]: INFO : Ignition finished successfully Dec 12 17:33:37.423697 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:33:37.423813 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:33:37.425466 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:33:37.425543 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:33:37.427513 systemd[1]: Stopped target network.target - Network. Dec 12 17:33:37.428555 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:33:37.428654 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:33:37.430342 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:33:37.430389 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:33:37.432036 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:33:37.432086 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:33:37.433700 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:33:37.433740 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:33:37.435468 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 12 17:33:37.435522 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:33:37.437517 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:33:37.439290 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:33:37.443785 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:33:37.443916 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:33:37.448066 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:33:37.448341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:33:37.448382 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:33:37.452171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:33:37.454351 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:33:37.454523 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:33:37.457383 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:33:37.457561 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:33:37.458870 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:33:37.458905 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:33:37.462007 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:33:37.463940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:33:37.464002 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:33:37.467511 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:33:37.467567 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 12 17:33:37.469548 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:33:37.469595 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:33:37.471604 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:33:37.475178 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:33:37.494703 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:33:37.494885 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:33:37.497960 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:33:37.498099 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:33:37.499998 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:33:37.500066 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:33:37.501435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:33:37.501562 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:33:37.503787 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:33:37.503841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:33:37.508246 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:33:37.508304 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:33:37.510552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:33:37.510596 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:33:37.514134 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:33:37.516550 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 12 17:33:37.516609 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:33:37.519954 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:33:37.520005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:33:37.522831 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:33:37.522885 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:33:37.526242 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:33:37.526295 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:33:37.528630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:33:37.528675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:33:37.538971 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:33:37.539079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:33:37.540786 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:33:37.543248 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:33:37.573409 systemd[1]: Switching root. Dec 12 17:33:37.615852 systemd-journald[245]: Journal stopped Dec 12 17:33:38.459212 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Dec 12 17:33:38.459261 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:33:38.459282 kernel: SELinux: policy capability open_perms=1 Dec 12 17:33:38.459292 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:33:38.459307 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:33:38.459320 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:33:38.459330 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:33:38.459340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:33:38.459349 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:33:38.459358 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:33:38.459372 kernel: audit: type=1403 audit(1765560817.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:33:38.459385 systemd[1]: Successfully loaded SELinux policy in 59.538ms. Dec 12 17:33:38.459397 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.510ms. Dec 12 17:33:38.459426 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:33:38.459465 systemd[1]: Detected virtualization kvm. Dec 12 17:33:38.459496 systemd[1]: Detected architecture arm64. Dec 12 17:33:38.459512 systemd[1]: Detected first boot. Dec 12 17:33:38.459523 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:33:38.459533 zram_generator::config[1086]: No configuration found. Dec 12 17:33:38.459547 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:33:38.459557 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:33:38.459570 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Dec 12 17:33:38.459580 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:33:38.459591 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:33:38.459602 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:33:38.459613 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:33:38.459624 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:33:38.459635 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:33:38.459646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:33:38.459656 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:33:38.459668 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:33:38.459678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:33:38.459688 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:33:38.459699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:33:38.459710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:33:38.459720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:33:38.459730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:33:38.459741 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:33:38.459751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:33:38.459763 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Dec 12 17:33:38.459774 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:33:38.459784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:33:38.459794 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:33:38.459805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:33:38.459815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:33:38.459826 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:33:38.459838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:33:38.459848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:33:38.459858 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:33:38.459868 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:33:38.459906 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:33:38.459919 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:33:38.459930 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:33:38.459941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:33:38.459952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:33:38.459962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:33:38.459975 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:33:38.459985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:33:38.459996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:33:38.460006 systemd[1]: Mounting media.mount - External Media Directory... 
Dec 12 17:33:38.460016 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:33:38.460026 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:33:38.460036 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:33:38.460047 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:33:38.460059 systemd[1]: Reached target machines.target - Containers. Dec 12 17:33:38.460070 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:33:38.460081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:33:38.460091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:33:38.460102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:33:38.460112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:33:38.460122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:33:38.460132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:33:38.460142 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:33:38.460154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:33:38.460165 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:33:38.460176 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:33:38.460187 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:33:38.460197 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 12 17:33:38.460213 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:33:38.460226 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:33:38.460236 kernel: fuse: init (API version 7.41) Dec 12 17:33:38.460250 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:33:38.460260 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:33:38.460270 kernel: ACPI: bus type drm_connector registered Dec 12 17:33:38.460279 kernel: loop: module loaded Dec 12 17:33:38.460289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:33:38.460300 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:33:38.460314 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:33:38.460324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:33:38.460334 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:33:38.460344 systemd[1]: Stopped verity-setup.service. Dec 12 17:33:38.460354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:33:38.460364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:33:38.460420 systemd-journald[1158]: Collecting audit messages is disabled. Dec 12 17:33:38.460458 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:33:38.460470 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:33:38.460482 systemd-journald[1158]: Journal started Dec 12 17:33:38.460502 systemd-journald[1158]: Runtime Journal (/run/log/journal/1510ed3ac5a74213b5d92d07c565712c) is 6M, max 48.5M, 42.4M free. 
Dec 12 17:33:38.199968 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:33:38.222501 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:33:38.222909 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:33:38.464253 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:33:38.464964 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:33:38.466195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:33:38.467487 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:33:38.468882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:33:38.470543 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:33:38.470710 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:33:38.472118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:33:38.472288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:33:38.473733 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:33:38.473883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:33:38.475244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:33:38.475412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:33:38.478787 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:33:38.478949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:33:38.480299 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:33:38.480484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:33:38.481843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 12 17:33:38.483494 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:33:38.485008 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:33:38.486578 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:33:38.499264 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:33:38.501679 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:33:38.503766 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:33:38.504979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:33:38.505014 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:33:38.506878 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:33:38.515264 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:33:38.516546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:33:38.517725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:33:38.519735 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:33:38.521011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:33:38.522072 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:33:38.523338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 12 17:33:38.527624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:33:38.527846 systemd-journald[1158]: Time spent on flushing to /var/log/journal/1510ed3ac5a74213b5d92d07c565712c is 45.495ms for 887 entries. Dec 12 17:33:38.527846 systemd-journald[1158]: System Journal (/var/log/journal/1510ed3ac5a74213b5d92d07c565712c) is 8M, max 195.6M, 187.6M free. Dec 12 17:33:38.584350 systemd-journald[1158]: Received client request to flush runtime journal. Dec 12 17:33:38.584404 kernel: loop0: detected capacity change from 0 to 211168 Dec 12 17:33:38.584426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:33:38.584465 kernel: loop1: detected capacity change from 0 to 119840 Dec 12 17:33:38.531602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:33:38.535752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:33:38.539255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:33:38.545001 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:33:38.547879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:33:38.550632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:33:38.554489 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:33:38.558879 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 12 17:33:38.559436 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 12 17:33:38.560693 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:33:38.564192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 12 17:33:38.569122 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:33:38.570535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:33:38.590259 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:33:38.593008 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:33:38.597863 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:33:38.601629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:33:38.617506 kernel: loop2: detected capacity change from 0 to 100632 Dec 12 17:33:38.619053 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 12 17:33:38.619072 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 12 17:33:38.622159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:33:38.643480 kernel: loop3: detected capacity change from 0 to 211168 Dec 12 17:33:38.650468 kernel: loop4: detected capacity change from 0 to 119840 Dec 12 17:33:38.656457 kernel: loop5: detected capacity change from 0 to 100632 Dec 12 17:33:38.662198 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:33:38.662678 (sd-merge)[1228]: Merged extensions into '/usr'. Dec 12 17:33:38.666626 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:33:38.666798 systemd[1]: Reloading... Dec 12 17:33:38.725722 zram_generator::config[1251]: No configuration found. Dec 12 17:33:38.813399 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:33:38.870186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:33:38.870485 systemd[1]: Reloading finished in 202 ms. 
Dec 12 17:33:38.904088 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:33:38.905535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:33:38.922057 systemd[1]: Starting ensure-sysext.service... Dec 12 17:33:38.923848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:33:38.933326 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:33:38.933345 systemd[1]: Reloading... Dec 12 17:33:38.944240 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:33:38.944270 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:33:38.944884 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:33:38.945164 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:33:38.945881 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:33:38.946182 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Dec 12 17:33:38.946304 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Dec 12 17:33:38.959275 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:33:38.959413 systemd-tmpfiles[1290]: Skipping /boot Dec 12 17:33:38.968645 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:33:38.968744 systemd-tmpfiles[1290]: Skipping /boot Dec 12 17:33:38.978466 zram_generator::config[1317]: No configuration found. Dec 12 17:33:39.107343 systemd[1]: Reloading finished in 173 ms. 
Dec 12 17:33:39.127575 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:33:39.133106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:33:39.149541 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:33:39.152745 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:33:39.159316 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:33:39.162339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:33:39.166616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:33:39.168642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:33:39.181801 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:33:39.184992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:33:39.186553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:33:39.192000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:33:39.195401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:33:39.196540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:33:39.196661 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:33:39.198025 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 12 17:33:39.200048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:33:39.200416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:33:39.206188 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:33:39.206358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:33:39.211046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:33:39.214793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:33:39.217020 augenrules[1385]: No rules Dec 12 17:33:39.218036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:33:39.219313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:33:39.219458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:33:39.222685 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:33:39.225952 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Dec 12 17:33:39.226286 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:33:39.226490 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:33:39.228307 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:33:39.230181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:33:39.237861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:33:39.240175 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Dec 12 17:33:39.242096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:33:39.242291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:33:39.244277 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:33:39.245615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:33:39.247248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:33:39.249887 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:33:39.278113 systemd[1]: Finished ensure-sysext.service. Dec 12 17:33:39.279733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:33:39.297149 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:33:39.299750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:33:39.301626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:33:39.305653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:33:39.307604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:33:39.311678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:33:39.312858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:33:39.312919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:33:39.317339 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:33:39.325339 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 12 17:33:39.328602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:33:39.331074 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:33:39.331296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:33:39.334532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:33:39.334712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:33:39.338821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:33:39.339675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:33:39.343031 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:33:39.343199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:33:39.358745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:33:39.358821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:33:39.363139 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:33:39.370005 augenrules[1434]: /sbin/augenrules: No change Dec 12 17:33:39.380133 systemd-resolved[1356]: Positive Trust Anchors: Dec 12 17:33:39.380148 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:33:39.380179 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:33:39.387737 systemd-resolved[1356]: Defaulting to hostname 'linux'. Dec 12 17:33:39.389562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:33:39.391644 augenrules[1465]: No rules Dec 12 17:33:39.393877 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:33:39.394518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:33:39.401717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:33:39.403857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:33:39.407646 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:33:39.431629 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:33:39.445472 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:33:39.446761 systemd-networkd[1439]: lo: Link UP Dec 12 17:33:39.447043 systemd-networkd[1439]: lo: Gained carrier Dec 12 17:33:39.447235 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:33:39.448064 systemd-networkd[1439]: Enumeration completed
Dec 12 17:33:39.448699 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:33:39.448708 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:33:39.449098 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 17:33:39.449495 systemd-networkd[1439]: eth0: Link UP
Dec 12 17:33:39.449706 systemd-networkd[1439]: eth0: Gained carrier
Dec 12 17:33:39.449775 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:33:39.450619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 17:33:39.452115 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 17:33:39.453449 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 17:33:39.453487 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:33:39.454365 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 17:33:39.455588 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 17:33:39.456806 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 17:33:39.458133 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:33:39.461380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 17:33:39.464117 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 17:33:39.464522 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:33:39.467982 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 17:33:39.470683 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 17:33:39.471625 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Dec 12 17:33:39.471985 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 17:33:39.474736 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 12 17:33:39.474788 systemd-timesyncd[1440]: Initial clock synchronization to Fri 2025-12-12 17:33:39.479793 UTC.
Dec 12 17:33:39.475539 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 17:33:39.476888 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 17:33:39.479311 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:33:39.481789 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 17:33:39.483163 systemd[1]: Reached target network.target - Network.
Dec 12 17:33:39.484168 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:33:39.485195 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:33:39.486666 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:33:39.486698 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:33:39.489673 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 17:33:39.493751 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 17:33:39.508059 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 17:33:39.512701 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 17:33:39.517861 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 17:33:39.519122 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 17:33:39.521718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 17:33:39.521914 jq[1499]: false
Dec 12 17:33:39.524489 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 17:33:39.531610 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 17:33:39.533892 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 17:33:39.537716 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 17:33:39.539784 extend-filesystems[1500]: Found /dev/vda6
Dec 12 17:33:39.540694 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 17:33:39.544731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 17:33:39.544872 extend-filesystems[1500]: Found /dev/vda9
Dec 12 17:33:39.546840 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 17:33:39.547820 extend-filesystems[1500]: Checking size of /dev/vda9
Dec 12 17:33:39.550766 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 17:33:39.552154 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 17:33:39.556564 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 17:33:39.559347 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 17:33:39.560938 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 17:33:39.561128 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 17:33:39.561453 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 17:33:39.561632 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 17:33:39.567158 extend-filesystems[1500]: Resized partition /dev/vda9
Dec 12 17:33:39.573323 extend-filesystems[1528]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 17:33:39.585668 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 12 17:33:39.571236 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 17:33:39.571435 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 17:33:39.586013 jq[1521]: true
Dec 12 17:33:39.597473 tar[1527]: linux-arm64/LICENSE
Dec 12 17:33:39.597473 tar[1527]: linux-arm64/helm
Dec 12 17:33:39.604826 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 17:33:39.617543 update_engine[1520]: I20251212 17:33:39.611019 1520 main.cc:92] Flatcar Update Engine starting
Dec 12 17:33:39.621483 jq[1536]: true
Dec 12 17:33:39.621625 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 17:33:39.622250 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 12 17:33:39.622996 systemd-logind[1510]: New seat seat0.
Dec 12 17:33:39.627895 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 12 17:33:39.631694 dbus-daemon[1496]: [system] SELinux support is enabled
Dec 12 17:33:39.631782 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 17:33:39.640726 update_engine[1520]: I20251212 17:33:39.638409 1520 update_check_scheduler.cc:74] Next update check in 2m37s
Dec 12 17:33:39.635903 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 17:33:39.643372 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 12 17:33:39.643372 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 12 17:33:39.643372 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 12 17:33:39.658401 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Dec 12 17:33:39.645688 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 12 17:33:39.644247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 17:33:39.644275 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 17:33:39.649636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:33:39.652523 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 17:33:39.652548 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 17:33:39.654033 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 17:33:39.654303 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 17:33:39.656824 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 17:33:39.664003 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 17:33:39.677691 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 17:33:39.685946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 17:33:39.690696 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 12 17:33:39.710438 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 17:33:39.725483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:33:39.783734 containerd[1530]: time="2025-12-12T17:33:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 17:33:39.784475 containerd[1530]: time="2025-12-12T17:33:39.784420000Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 17:33:39.796176 containerd[1530]: time="2025-12-12T17:33:39.796114160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10µs"
Dec 12 17:33:39.796176 containerd[1530]: time="2025-12-12T17:33:39.796159080Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 17:33:39.796176 containerd[1530]: time="2025-12-12T17:33:39.796177960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 17:33:39.796615 containerd[1530]: time="2025-12-12T17:33:39.796567120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 17:33:39.796667 containerd[1530]: time="2025-12-12T17:33:39.796648920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 17:33:39.796705 containerd[1530]: time="2025-12-12T17:33:39.796690640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:33:39.796774 containerd[1530]: time="2025-12-12T17:33:39.796757160Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:33:39.796846 containerd[1530]: time="2025-12-12T17:33:39.796775000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797183 containerd[1530]: time="2025-12-12T17:33:39.797141600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797247 containerd[1530]: time="2025-12-12T17:33:39.797228200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797268 containerd[1530]: time="2025-12-12T17:33:39.797251120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797268 containerd[1530]: time="2025-12-12T17:33:39.797261200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797420 containerd[1530]: time="2025-12-12T17:33:39.797400000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797765 containerd[1530]: time="2025-12-12T17:33:39.797742560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797847 containerd[1530]: time="2025-12-12T17:33:39.797784560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:33:39.797868 containerd[1530]: time="2025-12-12T17:33:39.797848240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 17:33:39.797897 containerd[1530]: time="2025-12-12T17:33:39.797885520Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 17:33:39.798256 containerd[1530]: time="2025-12-12T17:33:39.798234720Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 17:33:39.798385 containerd[1530]: time="2025-12-12T17:33:39.798361600Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 17:33:39.805452 containerd[1530]: time="2025-12-12T17:33:39.805398480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 17:33:39.805568 containerd[1530]: time="2025-12-12T17:33:39.805545040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 17:33:39.805681 containerd[1530]: time="2025-12-12T17:33:39.805663480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 17:33:39.805708 containerd[1530]: time="2025-12-12T17:33:39.805687280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 17:33:39.806061 containerd[1530]: time="2025-12-12T17:33:39.806004960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 17:33:39.806061 containerd[1530]: time="2025-12-12T17:33:39.806046240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 17:33:39.806234 containerd[1530]: time="2025-12-12T17:33:39.806091600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 17:33:39.806234 containerd[1530]: time="2025-12-12T17:33:39.806111800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 17:33:39.806234 containerd[1530]: time="2025-12-12T17:33:39.806131600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 17:33:39.806496 containerd[1530]: time="2025-12-12T17:33:39.806461000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 17:33:39.806572 containerd[1530]: time="2025-12-12T17:33:39.806556120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 17:33:39.806636 containerd[1530]: time="2025-12-12T17:33:39.806618360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 17:33:39.806901 containerd[1530]: time="2025-12-12T17:33:39.806867760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 17:33:39.806987 containerd[1530]: time="2025-12-12T17:33:39.806968120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 17:33:39.807046 containerd[1530]: time="2025-12-12T17:33:39.807032880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 17:33:39.807123 containerd[1530]: time="2025-12-12T17:33:39.807109280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 17:33:39.807184 containerd[1530]: time="2025-12-12T17:33:39.807171840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 17:33:39.807255 containerd[1530]: time="2025-12-12T17:33:39.807238400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 17:33:39.807326 containerd[1530]: time="2025-12-12T17:33:39.807312360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 17:33:39.807409 containerd[1530]: time="2025-12-12T17:33:39.807375520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 17:33:39.807488 containerd[1530]: time="2025-12-12T17:33:39.807473760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 17:33:39.807554 containerd[1530]: time="2025-12-12T17:33:39.807537720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 17:33:39.807607 containerd[1530]: time="2025-12-12T17:33:39.807595480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 17:33:39.807857 containerd[1530]: time="2025-12-12T17:33:39.807840920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 17:33:39.807922 containerd[1530]: time="2025-12-12T17:33:39.807908200Z" level=info msg="Start snapshots syncer"
Dec 12 17:33:39.808010 containerd[1530]: time="2025-12-12T17:33:39.807995400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 17:33:39.809061 containerd[1530]: time="2025-12-12T17:33:39.809015440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 17:33:39.809188 containerd[1530]: time="2025-12-12T17:33:39.809081520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 17:33:39.809188 containerd[1530]: time="2025-12-12T17:33:39.809142720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 17:33:39.809481 containerd[1530]: time="2025-12-12T17:33:39.809434160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 17:33:39.809567 containerd[1530]: time="2025-12-12T17:33:39.809545960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 17:33:39.809594 containerd[1530]: time="2025-12-12T17:33:39.809571240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 17:33:39.809594 containerd[1530]: time="2025-12-12T17:33:39.809585040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 17:33:39.809670 containerd[1530]: time="2025-12-12T17:33:39.809605960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 17:33:39.809691 containerd[1530]: time="2025-12-12T17:33:39.809677680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 17:33:39.809710 containerd[1530]: time="2025-12-12T17:33:39.809692080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 17:33:39.809783 containerd[1530]: time="2025-12-12T17:33:39.809766720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 17:33:39.809809 containerd[1530]: time="2025-12-12T17:33:39.809789560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 17:33:39.809855 containerd[1530]: time="2025-12-12T17:33:39.809803840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 17:33:39.809906 containerd[1530]: time="2025-12-12T17:33:39.809891280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 17:33:39.810073 containerd[1530]: time="2025-12-12T17:33:39.810049360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 17:33:39.810098 containerd[1530]: time="2025-12-12T17:33:39.810073240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 17:33:39.810098 containerd[1530]: time="2025-12-12T17:33:39.810085840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 17:33:39.810098 containerd[1530]: time="2025-12-12T17:33:39.810094840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 17:33:39.810158 containerd[1530]: time="2025-12-12T17:33:39.810105840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 17:33:39.810188 containerd[1530]: time="2025-12-12T17:33:39.810171520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12 17:33:39.810433 containerd[1530]: time="2025-12-12T17:33:39.810397680Z" level=info msg="runtime interface created"
Dec 12 17:33:39.810433 containerd[1530]: time="2025-12-12T17:33:39.810414360Z" level=info msg="created NRI interface"
Dec 12 17:33:39.810433 containerd[1530]: time="2025-12-12T17:33:39.810425880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 17:33:39.810503 containerd[1530]: time="2025-12-12T17:33:39.810452360Z" level=info msg="Connect containerd service"
Dec 12 17:33:39.810503 containerd[1530]: time="2025-12-12T17:33:39.810484680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 17:33:39.811756 containerd[1530]: time="2025-12-12T17:33:39.811709720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 17:33:39.889669 containerd[1530]: time="2025-12-12T17:33:39.889518120Z" level=info msg="Start subscribing containerd event"
Dec 12 17:33:39.889669 containerd[1530]: time="2025-12-12T17:33:39.889603680Z" level=info msg="Start recovering state"
Dec 12 17:33:39.889791 containerd[1530]: time="2025-12-12T17:33:39.889775640Z" level=info msg="Start event monitor"
Dec 12 17:33:39.889916 containerd[1530]: time="2025-12-12T17:33:39.889900240Z" level=info msg="Start cni network conf syncer for default"
Dec 12 17:33:39.889942 containerd[1530]: time="2025-12-12T17:33:39.889918480Z" level=info msg="Start streaming server"
Dec 12 17:33:39.889942 containerd[1530]: time="2025-12-12T17:33:39.889928280Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 17:33:39.889942 containerd[1530]: time="2025-12-12T17:33:39.889935400Z" level=info msg="runtime interface starting up..."
Dec 12 17:33:39.889942 containerd[1530]: time="2025-12-12T17:33:39.889941360Z" level=info msg="starting plugins..."
Dec 12 17:33:39.890274 containerd[1530]: time="2025-12-12T17:33:39.889958600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 17:33:39.890274 containerd[1530]: time="2025-12-12T17:33:39.890114640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 17:33:39.890312 containerd[1530]: time="2025-12-12T17:33:39.890262880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 17:33:39.892502 containerd[1530]: time="2025-12-12T17:33:39.892019320Z" level=info msg="containerd successfully booted in 0.108915s"
Dec 12 17:33:39.890463 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 17:33:39.938148 tar[1527]: linux-arm64/README.md
Dec 12 17:33:39.961520 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 17:33:39.992196 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 17:33:40.012627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 17:33:40.015496 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 17:33:40.043487 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 17:33:40.044587 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 17:33:40.048647 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 17:33:40.080039 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 17:33:40.083924 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 17:33:40.086095 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 12 17:33:40.087427 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 17:33:41.320649 systemd-networkd[1439]: eth0: Gained IPv6LL
Dec 12 17:33:41.323056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 17:33:41.324877 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 17:33:41.328970 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 12 17:33:41.331656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:33:41.336671 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 17:33:41.372731 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 12 17:33:41.374514 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 12 17:33:41.376226 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 17:33:41.378296 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 17:33:41.944556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:33:41.946104 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 17:33:41.948688 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:33:41.950019 systemd[1]: Startup finished in 2.089s (kernel) + 5.196s (initrd) + 4.191s (userspace) = 11.477s.
Dec 12 17:33:42.366230 kubelet[1639]: E1212 17:33:42.366077 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:33:42.368556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:33:42.368701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:33:42.369027 systemd[1]: kubelet.service: Consumed 762ms CPU time, 258.6M memory peak.
Dec 12 17:33:45.543762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 17:33:45.544769 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:58164.service - OpenSSH per-connection server daemon (10.0.0.1:58164).
Dec 12 17:33:45.631354 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 58164 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:45.633224 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:45.639523 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 17:33:45.640559 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 17:33:45.647504 systemd-logind[1510]: New session 1 of user core.
Dec 12 17:33:45.664516 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 17:33:45.667670 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 17:33:45.686559 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 17:33:45.688734 systemd-logind[1510]: New session c1 of user core.
Dec 12 17:33:45.785480 systemd[1657]: Queued start job for default target default.target.
Dec 12 17:33:45.802500 systemd[1657]: Created slice app.slice - User Application Slice.
Dec 12 17:33:45.802527 systemd[1657]: Reached target paths.target - Paths.
Dec 12 17:33:45.802566 systemd[1657]: Reached target timers.target - Timers.
Dec 12 17:33:45.803781 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 17:33:45.813180 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 17:33:45.813245 systemd[1657]: Reached target sockets.target - Sockets.
Dec 12 17:33:45.813281 systemd[1657]: Reached target basic.target - Basic System.
Dec 12 17:33:45.813308 systemd[1657]: Reached target default.target - Main User Target.
Dec 12 17:33:45.813335 systemd[1657]: Startup finished in 118ms.
Dec 12 17:33:45.813541 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 17:33:45.815162 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 17:33:45.878863 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:58172.service - OpenSSH per-connection server daemon (10.0.0.1:58172).
Dec 12 17:33:45.943053 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 58172 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:45.944391 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:45.948297 systemd-logind[1510]: New session 2 of user core.
Dec 12 17:33:45.957622 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 17:33:46.010151 sshd[1671]: Connection closed by 10.0.0.1 port 58172
Dec 12 17:33:46.009644 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Dec 12 17:33:46.022702 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:58172.service: Deactivated successfully.
Dec 12 17:33:46.024727 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 17:33:46.027667 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit.
Dec 12 17:33:46.029940 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:58188.service - OpenSSH per-connection server daemon (10.0.0.1:58188).
Dec 12 17:33:46.030576 systemd-logind[1510]: Removed session 2.
Dec 12 17:33:46.087354 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 58188 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:46.088332 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:46.092259 systemd-logind[1510]: New session 3 of user core.
Dec 12 17:33:46.098604 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 17:33:46.147019 sshd[1681]: Connection closed by 10.0.0.1 port 58188
Dec 12 17:33:46.147689 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Dec 12 17:33:46.162473 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:58188.service: Deactivated successfully.
Dec 12 17:33:46.167772 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 17:33:46.168699 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit.
Dec 12 17:33:46.172732 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198).
Dec 12 17:33:46.173401 systemd-logind[1510]: Removed session 3.
Dec 12 17:33:46.227892 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:46.228423 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:46.232327 systemd-logind[1510]: New session 4 of user core.
Dec 12 17:33:46.239604 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 17:33:46.295919 sshd[1690]: Connection closed by 10.0.0.1 port 58198
Dec 12 17:33:46.296209 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Dec 12 17:33:46.309687 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:58198.service: Deactivated successfully.
Dec 12 17:33:46.313156 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 17:33:46.314523 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit.
Dec 12 17:33:46.316873 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:58214.service - OpenSSH per-connection server daemon (10.0.0.1:58214).
Dec 12 17:33:46.320587 systemd-logind[1510]: Removed session 4.
Dec 12 17:33:46.380362 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 58214 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:46.381950 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:46.386832 systemd-logind[1510]: New session 5 of user core.
Dec 12 17:33:46.405656 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 17:33:46.464412 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 17:33:46.464785 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:33:46.477833 sudo[1700]: pam_unix(sudo:session): session closed for user root
Dec 12 17:33:46.479564 sshd[1699]: Connection closed by 10.0.0.1 port 58214
Dec 12 17:33:46.480363 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Dec 12 17:33:46.493229 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:58214.service: Deactivated successfully.
Dec 12 17:33:46.496089 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 17:33:46.498040 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit.
Dec 12 17:33:46.500089 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:58226.service - OpenSSH per-connection server daemon (10.0.0.1:58226).
Dec 12 17:33:46.501376 systemd-logind[1510]: Removed session 5.
Dec 12 17:33:46.574212 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 58226 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:46.575589 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:46.579525 systemd-logind[1510]: New session 6 of user core.
Dec 12 17:33:46.588624 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 17:33:46.639829 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 17:33:46.640090 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:33:46.645433 sudo[1711]: pam_unix(sudo:session): session closed for user root
Dec 12 17:33:46.651787 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 17:33:46.652034 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:33:46.663116 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:33:46.702245 augenrules[1733]: No rules
Dec 12 17:33:46.703740 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:33:46.704556 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:33:46.705996 sudo[1710]: pam_unix(sudo:session): session closed for user root
Dec 12 17:33:46.707504 sshd[1709]: Connection closed by 10.0.0.1 port 58226
Dec 12 17:33:46.708368 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Dec 12 17:33:46.716560 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:58226.service: Deactivated successfully.
Dec 12 17:33:46.718388 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 17:33:46.719254 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit.
Dec 12 17:33:46.727951 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:58238.service - OpenSSH per-connection server daemon (10.0.0.1:58238).
Dec 12 17:33:46.728653 systemd-logind[1510]: Removed session 6.
Dec 12 17:33:46.782412 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 58238 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:33:46.784648 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:33:46.790123 systemd-logind[1510]: New session 7 of user core.
Dec 12 17:33:46.800612 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 17:33:46.851699 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 17:33:46.852309 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:33:47.148195 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 17:33:47.162788 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 17:33:47.366804 dockerd[1769]: time="2025-12-12T17:33:47.366733545Z" level=info msg="Starting up"
Dec 12 17:33:47.367586 dockerd[1769]: time="2025-12-12T17:33:47.367565971Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 17:33:47.378797 dockerd[1769]: time="2025-12-12T17:33:47.378736764Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 17:33:47.483072 systemd[1]: var-lib-docker-metacopy\x2dcheck1168500914-merged.mount: Deactivated successfully.
Dec 12 17:33:47.495885 dockerd[1769]: time="2025-12-12T17:33:47.495810579Z" level=info msg="Loading containers: start."
Dec 12 17:33:47.504469 kernel: Initializing XFRM netlink socket
Dec 12 17:33:47.720466 systemd-networkd[1439]: docker0: Link UP
Dec 12 17:33:47.724852 dockerd[1769]: time="2025-12-12T17:33:47.724787267Z" level=info msg="Loading containers: done."
Dec 12 17:33:47.738079 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3752423127-merged.mount: Deactivated successfully.
Dec 12 17:33:47.743142 dockerd[1769]: time="2025-12-12T17:33:47.743067411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 17:33:47.743273 dockerd[1769]: time="2025-12-12T17:33:47.743183826Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 17:33:47.743308 dockerd[1769]: time="2025-12-12T17:33:47.743278198Z" level=info msg="Initializing buildkit"
Dec 12 17:33:47.769461 dockerd[1769]: time="2025-12-12T17:33:47.769409510Z" level=info msg="Completed buildkit initialization"
Dec 12 17:33:47.776473 dockerd[1769]: time="2025-12-12T17:33:47.776409528Z" level=info msg="Daemon has completed initialization"
Dec 12 17:33:47.776671 dockerd[1769]: time="2025-12-12T17:33:47.776557987Z" level=info msg="API listen on /run/docker.sock"
Dec 12 17:33:47.776842 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 17:33:48.308466 containerd[1530]: time="2025-12-12T17:33:48.308060482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 12 17:33:48.852700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281385100.mount: Deactivated successfully.
Dec 12 17:33:49.893271 containerd[1530]: time="2025-12-12T17:33:49.893214568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:49.894324 containerd[1530]: time="2025-12-12T17:33:49.894091553Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283"
Dec 12 17:33:49.895616 containerd[1530]: time="2025-12-12T17:33:49.895584093Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:49.898628 containerd[1530]: time="2025-12-12T17:33:49.898599456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:49.900186 containerd[1530]: time="2025-12-12T17:33:49.900145522Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.592045155s"
Dec 12 17:33:49.900246 containerd[1530]: time="2025-12-12T17:33:49.900193648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 12 17:33:49.901395 containerd[1530]: time="2025-12-12T17:33:49.901368589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 12 17:33:50.998756 containerd[1530]: time="2025-12-12T17:33:50.998689505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:51.003424 containerd[1530]: time="2025-12-12T17:33:51.003051288Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083"
Dec 12 17:33:51.004226 containerd[1530]: time="2025-12-12T17:33:51.004190937Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:51.007475 containerd[1530]: time="2025-12-12T17:33:51.007292247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:51.009335 containerd[1530]: time="2025-12-12T17:33:51.009298113Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.107892479s"
Dec 12 17:33:51.009473 containerd[1530]: time="2025-12-12T17:33:51.009433889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 12 17:33:51.009968 containerd[1530]: time="2025-12-12T17:33:51.009945027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 12 17:33:52.074279 containerd[1530]: time="2025-12-12T17:33:52.074010248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:52.075091 containerd[1530]: time="2025-12-12T17:33:52.074935190Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069"
Dec 12 17:33:52.075769 containerd[1530]: time="2025-12-12T17:33:52.075738397Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:52.079300 containerd[1530]: time="2025-12-12T17:33:52.079266744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:52.081103 containerd[1530]: time="2025-12-12T17:33:52.080997013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.070930173s"
Dec 12 17:33:52.081103 containerd[1530]: time="2025-12-12T17:33:52.081048178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 12 17:33:52.081609 containerd[1530]: time="2025-12-12T17:33:52.081568875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 12 17:33:52.425227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:33:52.426595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:33:52.578485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:33:52.582690 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:33:52.618648 kubelet[2060]: E1212 17:33:52.618578 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:33:52.622024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:33:52.622169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:33:52.623530 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.9M memory peak.
Dec 12 17:33:53.162036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904591831.mount: Deactivated successfully.
Dec 12 17:33:53.529464 containerd[1530]: time="2025-12-12T17:33:53.529355424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:53.530973 containerd[1530]: time="2025-12-12T17:33:53.530799457Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675"
Dec 12 17:33:53.531740 containerd[1530]: time="2025-12-12T17:33:53.531712033Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:53.534020 containerd[1530]: time="2025-12-12T17:33:53.533787373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:53.534353 containerd[1530]: time="2025-12-12T17:33:53.534325510Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.452727512s"
Dec 12 17:33:53.534353 containerd[1530]: time="2025-12-12T17:33:53.534352193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 12 17:33:53.534795 containerd[1530]: time="2025-12-12T17:33:53.534764157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 12 17:33:54.054391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40048084.mount: Deactivated successfully.
Dec 12 17:33:54.723961 containerd[1530]: time="2025-12-12T17:33:54.723907869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:54.724741 containerd[1530]: time="2025-12-12T17:33:54.724336273Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Dec 12 17:33:54.725485 containerd[1530]: time="2025-12-12T17:33:54.725456668Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:54.729051 containerd[1530]: time="2025-12-12T17:33:54.729019514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:54.729990 containerd[1530]: time="2025-12-12T17:33:54.729954170Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.195053718s"
Dec 12 17:33:54.729990 containerd[1530]: time="2025-12-12T17:33:54.729989293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 12 17:33:54.730498 containerd[1530]: time="2025-12-12T17:33:54.730463502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 17:33:55.139791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337243790.mount: Deactivated successfully.
Dec 12 17:33:55.145248 containerd[1530]: time="2025-12-12T17:33:55.145188753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:33:55.145714 containerd[1530]: time="2025-12-12T17:33:55.145680282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 12 17:33:55.146710 containerd[1530]: time="2025-12-12T17:33:55.146681341Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:33:55.148666 containerd[1530]: time="2025-12-12T17:33:55.148607973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:33:55.149549 containerd[1530]: time="2025-12-12T17:33:55.149434815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 418.941671ms"
Dec 12 17:33:55.149549 containerd[1530]: time="2025-12-12T17:33:55.149482300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 12 17:33:55.150055 containerd[1530]: time="2025-12-12T17:33:55.150003312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 12 17:33:55.724934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005051237.mount: Deactivated successfully.
Dec 12 17:33:57.318845 containerd[1530]: time="2025-12-12T17:33:57.318791228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:57.319981 containerd[1530]: time="2025-12-12T17:33:57.319940536Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653"
Dec 12 17:33:57.321039 containerd[1530]: time="2025-12-12T17:33:57.321001955Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:57.325179 containerd[1530]: time="2025-12-12T17:33:57.325132140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:33:57.325809 containerd[1530]: time="2025-12-12T17:33:57.325768760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.175703242s"
Dec 12 17:33:57.325871 containerd[1530]: time="2025-12-12T17:33:57.325808404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 12 17:34:02.259290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:02.259427 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.9M memory peak.
Dec 12 17:34:02.261397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:02.281611 systemd[1]: Reload requested from client PID 2219 ('systemctl') (unit session-7.scope)...
Dec 12 17:34:02.281624 systemd[1]: Reloading...
Dec 12 17:34:02.370554 zram_generator::config[2262]: No configuration found.
Dec 12 17:34:02.521904 systemd[1]: Reloading finished in 239 ms.
Dec 12 17:34:02.584469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 17:34:02.584536 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 17:34:02.584916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:02.584965 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.2M memory peak.
Dec 12 17:34:02.587559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:02.732195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:02.736315 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:34:02.773174 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:34:02.773174 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:34:02.773174 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:34:02.774437 kubelet[2307]: I1212 17:34:02.774369 2307 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:34:03.354058 kubelet[2307]: I1212 17:34:03.354003 2307 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 12 17:34:03.354058 kubelet[2307]: I1212 17:34:03.354037 2307 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:34:03.354307 kubelet[2307]: I1212 17:34:03.354274 2307 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:34:03.377517 kubelet[2307]: E1212 17:34:03.377478 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 17:34:03.379629 kubelet[2307]: I1212 17:34:03.379591 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:34:03.389027 kubelet[2307]: I1212 17:34:03.389004 2307 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:34:03.391666 kubelet[2307]: I1212 17:34:03.391648 2307 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 17:34:03.391995 kubelet[2307]: I1212 17:34:03.391966 2307 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:34:03.392147 kubelet[2307]: I1212 17:34:03.391998 2307 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:34:03.392231 kubelet[2307]: I1212 17:34:03.392212 2307 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:34:03.392231 kubelet[2307]: I1212 17:34:03.392221 2307 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 17:34:03.392414 kubelet[2307]: I1212 17:34:03.392399 2307 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:34:03.395484 kubelet[2307]: I1212 17:34:03.395461 2307 kubelet.go:480] "Attempting to sync node with API server"
Dec 12 17:34:03.395539 kubelet[2307]: I1212 17:34:03.395489 2307 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:34:03.395539 kubelet[2307]: I1212 17:34:03.395517 2307 kubelet.go:386] "Adding apiserver pod source"
Dec 12 17:34:03.396879 kubelet[2307]: I1212 17:34:03.396504 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:34:03.397873 kubelet[2307]: I1212 17:34:03.397851 2307 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:34:03.399285 kubelet[2307]: I1212 17:34:03.399081 2307 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:34:03.399285 kubelet[2307]: W1212 17:34:03.399203 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 17:34:03.400341 kubelet[2307]: E1212 17:34:03.400310 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:34:03.401390 kubelet[2307]: E1212 17:34:03.401355 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:34:03.405824 kubelet[2307]: I1212 17:34:03.404708 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:34:03.405824 kubelet[2307]: I1212 17:34:03.404747 2307 server.go:1289] "Started kubelet" Dec 12 17:34:03.405824 kubelet[2307]: I1212 17:34:03.405384 2307 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:34:03.406679 kubelet[2307]: I1212 17:34:03.406457 2307 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:34:03.407579 kubelet[2307]: I1212 17:34:03.407134 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:34:03.407579 kubelet[2307]: I1212 17:34:03.407491 2307 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:34:03.408457 kubelet[2307]: E1212 17:34:03.407053 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880883b947a3b0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:34:03.40472091 +0000 UTC m=+0.665191670,LastTimestamp:2025-12-12 17:34:03.40472091 +0000 UTC m=+0.665191670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:34:03.408457 kubelet[2307]: I1212 17:34:03.408457 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:34:03.408573 kubelet[2307]: I1212 17:34:03.408538 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:34:03.410503 kubelet[2307]: E1212 17:34:03.409771 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:34:03.410503 kubelet[2307]: I1212 17:34:03.409814 2307 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:34:03.410503 kubelet[2307]: I1212 17:34:03.410028 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:34:03.410770 kubelet[2307]: I1212 17:34:03.410753 2307 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:34:03.411113 kubelet[2307]: E1212 17:34:03.411070 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Dec 12 17:34:03.411306 kubelet[2307]: E1212 17:34:03.411250 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:34:03.411699 kubelet[2307]: I1212 17:34:03.411672 2307 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:34:03.411921 kubelet[2307]: I1212 17:34:03.411748 2307 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:34:03.413251 kubelet[2307]: E1212 17:34:03.413194 2307 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:34:03.414115 kubelet[2307]: I1212 17:34:03.414056 2307 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:34:03.427028 kubelet[2307]: I1212 17:34:03.426094 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:34:03.427820 kubelet[2307]: I1212 17:34:03.427224 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:34:03.427820 kubelet[2307]: I1212 17:34:03.427249 2307 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:34:03.427820 kubelet[2307]: I1212 17:34:03.427267 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:34:03.427820 kubelet[2307]: I1212 17:34:03.427274 2307 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:34:03.427820 kubelet[2307]: E1212 17:34:03.427312 2307 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:34:03.428747 kubelet[2307]: I1212 17:34:03.428719 2307 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:34:03.428747 kubelet[2307]: I1212 17:34:03.428738 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:34:03.428848 kubelet[2307]: I1212 17:34:03.428757 2307 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:34:03.429254 kubelet[2307]: E1212 17:34:03.429195 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:34:03.510936 kubelet[2307]: E1212 17:34:03.510876 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:34:03.528161 kubelet[2307]: E1212 17:34:03.528103 2307 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:34:03.550156 kubelet[2307]: I1212 17:34:03.549739 2307 policy_none.go:49] "None policy: Start" Dec 12 17:34:03.550156 kubelet[2307]: I1212 17:34:03.549785 2307 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:34:03.550156 kubelet[2307]: I1212 17:34:03.549800 2307 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:34:03.556033 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 12 17:34:03.572491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:34:03.575771 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:34:03.590353 kubelet[2307]: E1212 17:34:03.590315 2307 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:34:03.590578 kubelet[2307]: I1212 17:34:03.590553 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:34:03.590613 kubelet[2307]: I1212 17:34:03.590574 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:34:03.590816 kubelet[2307]: I1212 17:34:03.590777 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:34:03.592237 kubelet[2307]: E1212 17:34:03.591915 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:34:03.592237 kubelet[2307]: E1212 17:34:03.591950 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:34:03.613395 kubelet[2307]: E1212 17:34:03.612475 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Dec 12 17:34:03.693662 kubelet[2307]: I1212 17:34:03.692594 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:03.693662 kubelet[2307]: E1212 17:34:03.692974 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:03.738813 systemd[1]: Created slice kubepods-burstable-pod80ee52156c3917af189e702caa158f0b.slice - libcontainer container kubepods-burstable-pod80ee52156c3917af189e702caa158f0b.slice. Dec 12 17:34:03.754274 kubelet[2307]: E1212 17:34:03.754236 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:03.758010 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 12 17:34:03.768564 kubelet[2307]: E1212 17:34:03.768533 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:03.771522 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 12 17:34:03.773388 kubelet[2307]: E1212 17:34:03.773349 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:03.811966 kubelet[2307]: I1212 17:34:03.811591 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:03.811966 kubelet[2307]: I1212 17:34:03.811790 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:03.811966 kubelet[2307]: I1212 17:34:03.811878 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:03.811966 kubelet[2307]: I1212 17:34:03.811948 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:03.812372 kubelet[2307]: I1212 17:34:03.812195 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:03.812372 kubelet[2307]: I1212 17:34:03.812358 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:03.812574 kubelet[2307]: I1212 17:34:03.812507 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:03.812574 kubelet[2307]: I1212 17:34:03.812561 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:03.812750 kubelet[2307]: I1212 17:34:03.812677 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:03.894968 kubelet[2307]: I1212 17:34:03.894590 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:03.894968 
kubelet[2307]: E1212 17:34:03.894924 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:04.013304 kubelet[2307]: E1212 17:34:04.013195 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Dec 12 17:34:04.056327 containerd[1530]: time="2025-12-12T17:34:04.056203816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80ee52156c3917af189e702caa158f0b,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:04.070962 containerd[1530]: time="2025-12-12T17:34:04.070841191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:04.076646 containerd[1530]: time="2025-12-12T17:34:04.076571339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:04.084833 containerd[1530]: time="2025-12-12T17:34:04.084781513Z" level=info msg="connecting to shim 9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457" address="unix:///run/containerd/s/53ef94224541a075b122017f2949cb674cbb07da13471f332da6211021043799" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:04.120063 containerd[1530]: time="2025-12-12T17:34:04.120010907Z" level=info msg="connecting to shim 40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b" address="unix:///run/containerd/s/541f3185f600a26cfb20f849e1f1518449b089ed925f5ba5055ce781f537b8ad" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:04.123522 containerd[1530]: time="2025-12-12T17:34:04.123481446Z" 
level=info msg="connecting to shim 31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc" address="unix:///run/containerd/s/6e3d120ea4dd432fca47e575212a5176c1604e3db448d4d609bbed1ada9f30fe" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:04.124638 systemd[1]: Started cri-containerd-9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457.scope - libcontainer container 9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457. Dec 12 17:34:04.152679 systemd[1]: Started cri-containerd-40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b.scope - libcontainer container 40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b. Dec 12 17:34:04.156177 systemd[1]: Started cri-containerd-31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc.scope - libcontainer container 31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc. Dec 12 17:34:04.179589 containerd[1530]: time="2025-12-12T17:34:04.179549358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80ee52156c3917af189e702caa158f0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457\"" Dec 12 17:34:04.189485 containerd[1530]: time="2025-12-12T17:34:04.189401295Z" level=info msg="CreateContainer within sandbox \"9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:34:04.202237 containerd[1530]: time="2025-12-12T17:34:04.202197051Z" level=info msg="Container 3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:04.203604 containerd[1530]: time="2025-12-12T17:34:04.203567994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b\"" Dec 12 17:34:04.207832 containerd[1530]: time="2025-12-12T17:34:04.207808671Z" level=info msg="CreateContainer within sandbox \"40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:34:04.210848 containerd[1530]: time="2025-12-12T17:34:04.210816496Z" level=info msg="CreateContainer within sandbox \"9fda895875896859b7bc5be2b92c307622962bafb6f607eb2b20810562a04457\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c\"" Dec 12 17:34:04.211547 containerd[1530]: time="2025-12-12T17:34:04.211519188Z" level=info msg="StartContainer for \"3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c\"" Dec 12 17:34:04.213694 containerd[1530]: time="2025-12-12T17:34:04.213586543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc\"" Dec 12 17:34:04.214519 containerd[1530]: time="2025-12-12T17:34:04.214485930Z" level=info msg="connecting to shim 3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c" address="unix:///run/containerd/s/53ef94224541a075b122017f2949cb674cbb07da13471f332da6211021043799" protocol=ttrpc version=3 Dec 12 17:34:04.218621 containerd[1530]: time="2025-12-12T17:34:04.218512591Z" level=info msg="CreateContainer within sandbox \"31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:34:04.221448 containerd[1530]: time="2025-12-12T17:34:04.221406048Z" level=info msg="Container b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:04.231171 
containerd[1530]: time="2025-12-12T17:34:04.231129335Z" level=info msg="CreateContainer within sandbox \"40fdf1b7dc71554ffb6ee627144860607436014f886badc70122f3177938487b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79\"" Dec 12 17:34:04.231554 containerd[1530]: time="2025-12-12T17:34:04.231529204Z" level=info msg="StartContainer for \"b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79\"" Dec 12 17:34:04.232596 containerd[1530]: time="2025-12-12T17:34:04.232572442Z" level=info msg="Container a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:04.232889 containerd[1530]: time="2025-12-12T17:34:04.232831062Z" level=info msg="connecting to shim b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79" address="unix:///run/containerd/s/541f3185f600a26cfb20f849e1f1518449b089ed925f5ba5055ce781f537b8ad" protocol=ttrpc version=3 Dec 12 17:34:04.234781 systemd[1]: Started cri-containerd-3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c.scope - libcontainer container 3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c. 
Dec 12 17:34:04.242249 containerd[1530]: time="2025-12-12T17:34:04.242206803Z" level=info msg="CreateContainer within sandbox \"31edd2917b9b0fba00aca0e4f8258e639b6ebf980ba53f19269d4cbee456a6fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50\"" Dec 12 17:34:04.242718 containerd[1530]: time="2025-12-12T17:34:04.242684038Z" level=info msg="StartContainer for \"a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50\"" Dec 12 17:34:04.244123 containerd[1530]: time="2025-12-12T17:34:04.243765919Z" level=info msg="connecting to shim a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50" address="unix:///run/containerd/s/6e3d120ea4dd432fca47e575212a5176c1604e3db448d4d609bbed1ada9f30fe" protocol=ttrpc version=3 Dec 12 17:34:04.254609 systemd[1]: Started cri-containerd-b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79.scope - libcontainer container b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79. Dec 12 17:34:04.271613 systemd[1]: Started cri-containerd-a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50.scope - libcontainer container a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50. 
Dec 12 17:34:04.285049 containerd[1530]: time="2025-12-12T17:34:04.284847231Z" level=info msg="StartContainer for \"3d7ad4a65eeeca67e72574d8547d990b2c75ad5c6dfc57b277438ae05b13455c\" returns successfully" Dec 12 17:34:04.297607 kubelet[2307]: I1212 17:34:04.297552 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:04.298180 kubelet[2307]: E1212 17:34:04.297988 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:04.307075 containerd[1530]: time="2025-12-12T17:34:04.306397162Z" level=info msg="StartContainer for \"b298e9658e12c10f9c760cf86e1d39207aa302357697b90a3d59153e43711f79\" returns successfully" Dec 12 17:34:04.323869 containerd[1530]: time="2025-12-12T17:34:04.323822305Z" level=info msg="StartContainer for \"a72a41a91e00d8e7ee87382399728d5c7ad99e950c02cbb1ff92379f2bc3eb50\" returns successfully" Dec 12 17:34:04.434342 kubelet[2307]: E1212 17:34:04.434305 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:04.435687 kubelet[2307]: E1212 17:34:04.435668 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:04.437024 kubelet[2307]: E1212 17:34:04.437000 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:05.099380 kubelet[2307]: I1212 17:34:05.099349 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:05.440234 kubelet[2307]: E1212 17:34:05.440006 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Dec 12 17:34:05.440234 kubelet[2307]: E1212 17:34:05.440104 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:05.791757 kubelet[2307]: E1212 17:34:05.791300 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:06.419317 kubelet[2307]: E1212 17:34:06.418054 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:34:06.441860 kubelet[2307]: E1212 17:34:06.441825 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:06.509620 kubelet[2307]: I1212 17:34:06.509572 2307 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:34:06.511840 kubelet[2307]: I1212 17:34:06.510926 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:06.527469 kubelet[2307]: E1212 17:34:06.527233 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:06.527644 kubelet[2307]: I1212 17:34:06.527619 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:06.530231 kubelet[2307]: E1212 17:34:06.530185 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:06.530231 kubelet[2307]: I1212 17:34:06.530213 2307 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:06.532691 kubelet[2307]: E1212 17:34:06.532641 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:07.408741 kubelet[2307]: I1212 17:34:07.408652 2307 apiserver.go:52] "Watching apiserver" Dec 12 17:34:07.510763 kubelet[2307]: I1212 17:34:07.510718 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:34:08.678759 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... Dec 12 17:34:08.679241 systemd[1]: Reloading... Dec 12 17:34:08.752480 zram_generator::config[2640]: No configuration found. Dec 12 17:34:09.018814 systemd[1]: Reloading finished in 339 ms. Dec 12 17:34:09.051216 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:09.069296 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:34:09.069566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:09.069622 systemd[1]: kubelet.service: Consumed 1.065s CPU time, 127.7M memory peak. Dec 12 17:34:09.071389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:09.234799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:09.244792 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:34:09.288468 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:34:09.288468 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:34:09.288468 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:34:09.288468 kubelet[2679]: I1212 17:34:09.288529 2679 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:34:09.296684 kubelet[2679]: I1212 17:34:09.296647 2679 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 12 17:34:09.299474 kubelet[2679]: I1212 17:34:09.296992 2679 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:34:09.299474 kubelet[2679]: I1212 17:34:09.297241 2679 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:34:09.299474 kubelet[2679]: I1212 17:34:09.298567 2679 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 17:34:09.301062 kubelet[2679]: I1212 17:34:09.301020 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:34:09.304945 kubelet[2679]: I1212 17:34:09.304923 2679 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:34:09.307539 kubelet[2679]: I1212 17:34:09.307521 2679 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 17:34:09.307733 kubelet[2679]: I1212 17:34:09.307712 2679 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:34:09.307885 kubelet[2679]: I1212 17:34:09.307734 2679 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:34:09.307970 kubelet[2679]: I1212 17:34:09.307896 2679 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:34:09.307970 kubelet[2679]: I1212 17:34:09.307905 2679 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 17:34:09.307970 kubelet[2679]: I1212 17:34:09.307949 2679 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:34:09.308102 kubelet[2679]: I1212 17:34:09.308089 2679 kubelet.go:480] "Attempting to sync node with API server"
Dec 12 17:34:09.308102 kubelet[2679]: I1212 17:34:09.308103 2679 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:34:09.308150 kubelet[2679]: I1212 17:34:09.308124 2679 kubelet.go:386] "Adding apiserver pod source"
Dec 12 17:34:09.308150 kubelet[2679]: I1212 17:34:09.308145 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:34:09.313826 kubelet[2679]: I1212 17:34:09.313790 2679 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:34:09.314737 kubelet[2679]: I1212 17:34:09.314668 2679 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:34:09.319131 kubelet[2679]: I1212 17:34:09.319067 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 17:34:09.319131 kubelet[2679]: I1212 17:34:09.319118 2679 server.go:1289] "Started kubelet"
Dec 12 17:34:09.319912 kubelet[2679]: I1212 17:34:09.319865 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:34:09.320489 kubelet[2679]: I1212 17:34:09.320127 2679 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:34:09.320489 kubelet[2679]: I1212 17:34:09.320178 2679 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:34:09.320600 kubelet[2679]: I1212 17:34:09.320579 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:34:09.323069 kubelet[2679]: I1212 17:34:09.323045 2679 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 17:34:09.329671 kubelet[2679]: I1212 17:34:09.329635 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:34:09.332345 kubelet[2679]: I1212 17:34:09.332307 2679 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 17:34:09.332532 kubelet[2679]: I1212 17:34:09.332422 2679 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 17:34:09.333026 kubelet[2679]: E1212 17:34:09.333000 2679 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:34:09.333207 kubelet[2679]: I1212 17:34:09.333162 2679 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:34:09.333525 kubelet[2679]: I1212 17:34:09.333479 2679 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:34:09.338801 kubelet[2679]: I1212 17:34:09.338772 2679 factory.go:223] Registration of the containerd container factory successfully
Dec 12 17:34:09.340639 kubelet[2679]: I1212 17:34:09.340599 2679 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 17:34:09.361606 kubelet[2679]: I1212 17:34:09.361540 2679 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 12 17:34:09.368676 kubelet[2679]: I1212 17:34:09.368498 2679 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 12 17:34:09.368676 kubelet[2679]: I1212 17:34:09.368536 2679 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 12 17:34:09.368676 kubelet[2679]: I1212 17:34:09.368573 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:34:09.368676 kubelet[2679]: I1212 17:34:09.368580 2679 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 12 17:34:09.368676 kubelet[2679]: E1212 17:34:09.368635 2679 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 17:34:09.386951 kubelet[2679]: I1212 17:34:09.386923 2679 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 17:34:09.386951 kubelet[2679]: I1212 17:34:09.386943 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 17:34:09.387100 kubelet[2679]: I1212 17:34:09.386965 2679 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:34:09.387100 kubelet[2679]: I1212 17:34:09.387087 2679 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 17:34:09.387140 kubelet[2679]: I1212 17:34:09.387096 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 17:34:09.387140 kubelet[2679]: I1212 17:34:09.387112 2679 policy_none.go:49] "None policy: Start"
Dec 12 17:34:09.387140 kubelet[2679]: I1212 17:34:09.387121 2679 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 17:34:09.387140 kubelet[2679]: I1212 17:34:09.387131 2679 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 17:34:09.387244 kubelet[2679]: I1212 17:34:09.387222 2679 state_mem.go:75] "Updated machine memory state"
Dec 12 17:34:09.391121 kubelet[2679]: E1212 17:34:09.391067 2679 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 17:34:09.391297 kubelet[2679]: I1212 17:34:09.391282 2679 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 17:34:09.391328 kubelet[2679]: I1212 17:34:09.391296 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 17:34:09.391475 kubelet[2679]: I1212 17:34:09.391459 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 17:34:09.392994 kubelet[2679]: E1212 17:34:09.392945 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 17:34:09.469694 kubelet[2679]: I1212 17:34:09.469607 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:09.469845 kubelet[2679]: I1212 17:34:09.469749 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.470088 kubelet[2679]: I1212 17:34:09.469618 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:34:09.494223 kubelet[2679]: I1212 17:34:09.494184 2679 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:34:09.503789 kubelet[2679]: I1212 17:34:09.502735 2679 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 12 17:34:09.503789 kubelet[2679]: I1212 17:34:09.502824 2679 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 17:34:09.541292 kubelet[2679]: I1212 17:34:09.541159 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.541292 kubelet[2679]: I1212 17:34:09.541224 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.541292 kubelet[2679]: I1212 17:34:09.541251 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.541292 kubelet[2679]: I1212 17:34:09.541269 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Dec 12 17:34:09.541292 kubelet[2679]: I1212 17:34:09.541295 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:09.541513 kubelet[2679]: I1212 17:34:09.541313 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:09.541513 kubelet[2679]: I1212 17:34:09.541330 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.541513 kubelet[2679]: I1212 17:34:09.541345 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:34:09.541513 kubelet[2679]: I1212 17:34:09.541357 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ee52156c3917af189e702caa158f0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ee52156c3917af189e702caa158f0b\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:09.678012 sudo[2722]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 12 17:34:09.678368 sudo[2722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 12 17:34:10.035411 sudo[2722]: pam_unix(sudo:session): session closed for user root
Dec 12 17:34:10.311370 kubelet[2679]: I1212 17:34:10.310980 2679 apiserver.go:52] "Watching apiserver"
Dec 12 17:34:10.332947 kubelet[2679]: I1212 17:34:10.332892 2679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 17:34:10.386912 kubelet[2679]: I1212 17:34:10.386719 2679 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:10.392678 kubelet[2679]: E1212 17:34:10.392552 2679 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:34:10.419988 kubelet[2679]: I1212 17:34:10.419919 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.419901831 podStartE2EDuration="1.419901831s" podCreationTimestamp="2025-12-12 17:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:10.41147291 +0000 UTC m=+1.162632307" watchObservedRunningTime="2025-12-12 17:34:10.419901831 +0000 UTC m=+1.171061228"
Dec 12 17:34:10.420155 kubelet[2679]: I1212 17:34:10.420052 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.42004848 podStartE2EDuration="1.42004848s" podCreationTimestamp="2025-12-12 17:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:10.420009158 +0000 UTC m=+1.171168555" watchObservedRunningTime="2025-12-12 17:34:10.42004848 +0000 UTC m=+1.171207877"
Dec 12 17:34:10.441671 kubelet[2679]: I1212 17:34:10.441616 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.441579531 podStartE2EDuration="1.441579531s" podCreationTimestamp="2025-12-12 17:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:10.429740479 +0000 UTC m=+1.180899876" watchObservedRunningTime="2025-12-12 17:34:10.441579531 +0000 UTC m=+1.192738928"
Dec 12 17:34:11.658213 sudo[1748]: pam_unix(sudo:session): session closed for user root
Dec 12 17:34:11.659573 sshd[1747]: Connection closed by 10.0.0.1 port 58238
Dec 12 17:34:11.661508 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:11.665778 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit.
Dec 12 17:34:11.666058 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:58238.service: Deactivated successfully.
Dec 12 17:34:11.668080 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 17:34:11.669641 systemd[1]: session-7.scope: Consumed 6.772s CPU time, 255.3M memory peak.
Dec 12 17:34:11.671914 systemd-logind[1510]: Removed session 7.
Dec 12 17:34:14.848511 kubelet[2679]: I1212 17:34:14.848435 2679 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 17:34:14.848935 containerd[1530]: time="2025-12-12T17:34:14.848821770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 17:34:14.849381 kubelet[2679]: I1212 17:34:14.849314 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 17:34:15.379319 systemd[1]: Created slice kubepods-besteffort-pod1d6f458e_a5a8_4b88_98f1_e0f9460b56a0.slice - libcontainer container kubepods-besteffort-pod1d6f458e_a5a8_4b88_98f1_e0f9460b56a0.slice.
Dec 12 17:34:15.480195 kubelet[2679]: I1212 17:34:15.480146 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktzk\" (UniqueName: \"kubernetes.io/projected/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-kube-api-access-cktzk\") pod \"cilium-operator-6c4d7847fc-44h5m\" (UID: \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\") " pod="kube-system/cilium-operator-6c4d7847fc-44h5m"
Dec 12 17:34:15.480195 kubelet[2679]: I1212 17:34:15.480192 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-44h5m\" (UID: \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\") " pod="kube-system/cilium-operator-6c4d7847fc-44h5m"
Dec 12 17:34:15.629030 systemd[1]: Created slice kubepods-besteffort-podaff1acea_862a_4a9e_9597_0629f548d1ed.slice - libcontainer container kubepods-besteffort-podaff1acea_862a_4a9e_9597_0629f548d1ed.slice.
Dec 12 17:34:15.639414 systemd[1]: Created slice kubepods-burstable-pod03a8a009_c38d_4c0d_b25c_98d1cef4c24b.slice - libcontainer container kubepods-burstable-pod03a8a009_c38d_4c0d_b25c_98d1cef4c24b.slice.
Dec 12 17:34:15.680855 kubelet[2679]: I1212 17:34:15.680807 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hostproc\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.680855 kubelet[2679]: I1212 17:34:15.680846 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-cgroup\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.680855 kubelet[2679]: I1212 17:34:15.680865 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-run\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680880 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-clustermesh-secrets\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680898 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aff1acea-862a-4a9e-9597-0629f548d1ed-xtables-lock\") pod \"kube-proxy-wfgxf\" (UID: \"aff1acea-862a-4a9e-9597-0629f548d1ed\") " pod="kube-system/kube-proxy-wfgxf"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680912 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aff1acea-862a-4a9e-9597-0629f548d1ed-kube-proxy\") pod \"kube-proxy-wfgxf\" (UID: \"aff1acea-862a-4a9e-9597-0629f548d1ed\") " pod="kube-system/kube-proxy-wfgxf"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680926 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cni-path\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680940 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-lib-modules\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681042 kubelet[2679]: I1212 17:34:15.680956 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjs4\" (UniqueName: \"kubernetes.io/projected/aff1acea-862a-4a9e-9597-0629f548d1ed-kube-api-access-jwjs4\") pod \"kube-proxy-wfgxf\" (UID: \"aff1acea-862a-4a9e-9597-0629f548d1ed\") " pod="kube-system/kube-proxy-wfgxf"
Dec 12 17:34:15.681165 kubelet[2679]: I1212 17:34:15.680971 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-etc-cni-netd\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681165 kubelet[2679]: I1212 17:34:15.680986 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-xtables-lock\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681165 kubelet[2679]: I1212 17:34:15.681000 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aff1acea-862a-4a9e-9597-0629f548d1ed-lib-modules\") pod \"kube-proxy-wfgxf\" (UID: \"aff1acea-862a-4a9e-9597-0629f548d1ed\") " pod="kube-system/kube-proxy-wfgxf"
Dec 12 17:34:15.681165 kubelet[2679]: I1212 17:34:15.681014 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6m8x\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-kube-api-access-g6m8x\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681165 kubelet[2679]: I1212 17:34:15.681030 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-config-path\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681265 kubelet[2679]: I1212 17:34:15.681048 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-net\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681265 kubelet[2679]: I1212 17:34:15.681062 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hubble-tls\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681265 kubelet[2679]: I1212 17:34:15.681087 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-bpf-maps\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.681265 kubelet[2679]: I1212 17:34:15.681129 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-kernel\") pod \"cilium-thkdg\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") " pod="kube-system/cilium-thkdg"
Dec 12 17:34:15.687558 containerd[1530]: time="2025-12-12T17:34:15.687503769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-44h5m,Uid:1d6f458e-a5a8-4b88-98f1-e0f9460b56a0,Namespace:kube-system,Attempt:0,}"
Dec 12 17:34:15.707502 containerd[1530]: time="2025-12-12T17:34:15.706985716Z" level=info msg="connecting to shim 1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507" address="unix:///run/containerd/s/6ca631dd9ed820464e39b349cb519a1c5973c02aaeb8f82610fbc1059f629245" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:34:15.734649 systemd[1]: Started cri-containerd-1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507.scope - libcontainer container 1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507.
Dec 12 17:34:15.772861 containerd[1530]: time="2025-12-12T17:34:15.772819107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-44h5m,Uid:1d6f458e-a5a8-4b88-98f1-e0f9460b56a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\""
Dec 12 17:34:15.774781 containerd[1530]: time="2025-12-12T17:34:15.774673925Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 12 17:34:15.935925 containerd[1530]: time="2025-12-12T17:34:15.935880945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfgxf,Uid:aff1acea-862a-4a9e-9597-0629f548d1ed,Namespace:kube-system,Attempt:0,}"
Dec 12 17:34:15.942888 containerd[1530]: time="2025-12-12T17:34:15.942843352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thkdg,Uid:03a8a009-c38d-4c0d-b25c-98d1cef4c24b,Namespace:kube-system,Attempt:0,}"
Dec 12 17:34:15.958236 containerd[1530]: time="2025-12-12T17:34:15.958183281Z" level=info msg="connecting to shim c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a" address="unix:///run/containerd/s/75378d0cbeb918b08680e41fe3ee17b6cb6b197626dfbd9bd8cdfeb696eeaca2" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:34:15.961880 containerd[1530]: time="2025-12-12T17:34:15.961845554Z" level=info msg="connecting to shim 71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c" address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:34:15.983703 systemd[1]: Started cri-containerd-c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a.scope - libcontainer container c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a.
Dec 12 17:34:15.986714 systemd[1]: Started cri-containerd-71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c.scope - libcontainer container 71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c.
Dec 12 17:34:16.027316 containerd[1530]: time="2025-12-12T17:34:16.027277642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfgxf,Uid:aff1acea-862a-4a9e-9597-0629f548d1ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a\""
Dec 12 17:34:16.033863 containerd[1530]: time="2025-12-12T17:34:16.033821376Z" level=info msg="CreateContainer within sandbox \"c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 17:34:16.034810 containerd[1530]: time="2025-12-12T17:34:16.034767264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thkdg,Uid:03a8a009-c38d-4c0d-b25c-98d1cef4c24b,Namespace:kube-system,Attempt:0,} returns sandbox id \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\""
Dec 12 17:34:16.042961 containerd[1530]: time="2025-12-12T17:34:16.042922481Z" level=info msg="Container c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:34:16.050249 containerd[1530]: time="2025-12-12T17:34:16.050196772Z" level=info msg="CreateContainer within sandbox \"c2c0d2fb63221abde7d01a670fdc478d0867f54999f06f4ed881a0825b598e9a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96\""
Dec 12 17:34:16.050803 containerd[1530]: time="2025-12-12T17:34:16.050779522Z" level=info msg="StartContainer for \"c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96\""
Dec 12 17:34:16.052221 containerd[1530]: time="2025-12-12T17:34:16.052191394Z" level=info msg="connecting to shim c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96" address="unix:///run/containerd/s/75378d0cbeb918b08680e41fe3ee17b6cb6b197626dfbd9bd8cdfeb696eeaca2" protocol=ttrpc version=3
Dec 12 17:34:16.076649 systemd[1]: Started cri-containerd-c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96.scope - libcontainer container c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96.
Dec 12 17:34:16.155501 containerd[1530]: time="2025-12-12T17:34:16.155451189Z" level=info msg="StartContainer for \"c38037d1b67b4e0a16cf7e7223fd9f88e0d637642c61a5d28e0879cc4df5fd96\" returns successfully"
Dec 12 17:34:16.424865 kubelet[2679]: I1212 17:34:16.424811 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfgxf" podStartSLOduration=1.424227078 podStartE2EDuration="1.424227078s" podCreationTimestamp="2025-12-12 17:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:16.424109272 +0000 UTC m=+7.175268669" watchObservedRunningTime="2025-12-12 17:34:16.424227078 +0000 UTC m=+7.175386475"
Dec 12 17:34:17.242717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687464688.mount: Deactivated successfully.
Dec 12 17:34:17.538126 containerd[1530]: time="2025-12-12T17:34:17.537991232Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:17.538616 containerd[1530]: time="2025-12-12T17:34:17.538588862Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Dec 12 17:34:17.539436 containerd[1530]: time="2025-12-12T17:34:17.539414743Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:17.540994 containerd[1530]: time="2025-12-12T17:34:17.540666085Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.765745586s"
Dec 12 17:34:17.540994 containerd[1530]: time="2025-12-12T17:34:17.540703686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 12 17:34:17.541938 containerd[1530]: time="2025-12-12T17:34:17.541891825Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 12 17:34:17.545337 containerd[1530]: time="2025-12-12T17:34:17.545309314Z" level=info msg="CreateContainer within sandbox \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 17:34:17.556532 containerd[1530]: time="2025-12-12T17:34:17.556254736Z" level=info msg="Container 62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:34:17.558423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916207.mount: Deactivated successfully.
Dec 12 17:34:17.566046 containerd[1530]: time="2025-12-12T17:34:17.565994618Z" level=info msg="CreateContainer within sandbox \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\""
Dec 12 17:34:17.566637 containerd[1530]: time="2025-12-12T17:34:17.566592087Z" level=info msg="StartContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\""
Dec 12 17:34:17.567637 containerd[1530]: time="2025-12-12T17:34:17.567586137Z" level=info msg="connecting to shim 62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2" address="unix:///run/containerd/s/6ca631dd9ed820464e39b349cb519a1c5973c02aaeb8f82610fbc1059f629245" protocol=ttrpc version=3
Dec 12 17:34:17.618786 systemd[1]: Started cri-containerd-62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2.scope - libcontainer container 62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2.
Dec 12 17:34:17.645291 containerd[1530]: time="2025-12-12T17:34:17.645255540Z" level=info msg="StartContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" returns successfully" Dec 12 17:34:18.455122 kubelet[2679]: I1212 17:34:18.455060 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-44h5m" podStartSLOduration=1.687472031 podStartE2EDuration="3.45504507s" podCreationTimestamp="2025-12-12 17:34:15 +0000 UTC" firstStartedPulling="2025-12-12 17:34:15.77419634 +0000 UTC m=+6.525355737" lastFinishedPulling="2025-12-12 17:34:17.541769379 +0000 UTC m=+8.292928776" observedRunningTime="2025-12-12 17:34:18.454920104 +0000 UTC m=+9.206079501" watchObservedRunningTime="2025-12-12 17:34:18.45504507 +0000 UTC m=+9.206204547" Dec 12 17:34:22.366800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455942174.mount: Deactivated successfully. Dec 12 17:34:23.655244 containerd[1530]: time="2025-12-12T17:34:23.655179311Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:34:23.655840 containerd[1530]: time="2025-12-12T17:34:23.655805537Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 12 17:34:23.656727 containerd[1530]: time="2025-12-12T17:34:23.656671052Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:34:23.664177 containerd[1530]: time="2025-12-12T17:34:23.664130077Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.122205771s" Dec 12 17:34:23.664177 containerd[1530]: time="2025-12-12T17:34:23.664172119Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 12 17:34:23.669489 containerd[1530]: time="2025-12-12T17:34:23.669094840Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:34:23.676008 containerd[1530]: time="2025-12-12T17:34:23.675322975Z" level=info msg="Container b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:23.679168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824997879.mount: Deactivated successfully. 
Dec 12 17:34:23.684703 containerd[1530]: time="2025-12-12T17:34:23.684658637Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\"" Dec 12 17:34:23.685179 containerd[1530]: time="2025-12-12T17:34:23.685138617Z" level=info msg="StartContainer for \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\"" Dec 12 17:34:23.685974 containerd[1530]: time="2025-12-12T17:34:23.685945970Z" level=info msg="connecting to shim b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa" address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" protocol=ttrpc version=3 Dec 12 17:34:23.706611 systemd[1]: Started cri-containerd-b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa.scope - libcontainer container b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa. Dec 12 17:34:23.735831 containerd[1530]: time="2025-12-12T17:34:23.735791648Z" level=info msg="StartContainer for \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" returns successfully" Dec 12 17:34:23.750336 systemd[1]: cri-containerd-b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa.scope: Deactivated successfully. Dec 12 17:34:23.796277 containerd[1530]: time="2025-12-12T17:34:23.796230640Z" level=info msg="received container exit event container_id:\"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" id:\"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" pid:3161 exited_at:{seconds:1765560863 nanos:789936583}" Dec 12 17:34:23.832255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa-rootfs.mount: Deactivated successfully. 
Dec 12 17:34:24.453175 containerd[1530]: time="2025-12-12T17:34:24.453135251Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:34:24.460760 containerd[1530]: time="2025-12-12T17:34:24.460537625Z" level=info msg="Container 57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:24.467341 containerd[1530]: time="2025-12-12T17:34:24.467302613Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\"" Dec 12 17:34:24.467931 containerd[1530]: time="2025-12-12T17:34:24.467879995Z" level=info msg="StartContainer for \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\"" Dec 12 17:34:24.468772 containerd[1530]: time="2025-12-12T17:34:24.468748590Z" level=info msg="connecting to shim 57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3" address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" protocol=ttrpc version=3 Dec 12 17:34:24.472541 update_engine[1520]: I20251212 17:34:24.472494 1520 update_attempter.cc:509] Updating boot flags... Dec 12 17:34:24.489587 systemd[1]: Started cri-containerd-57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3.scope - libcontainer container 57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3. 
Dec 12 17:34:24.570349 containerd[1530]: time="2025-12-12T17:34:24.570310974Z" level=info msg="StartContainer for \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" returns successfully" Dec 12 17:34:24.619088 systemd[1]: cri-containerd-57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3.scope: Deactivated successfully. Dec 12 17:34:24.619980 containerd[1530]: time="2025-12-12T17:34:24.619875178Z" level=info msg="received container exit event container_id:\"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" id:\"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" pid:3214 exited_at:{seconds:1765560864 nanos:619490563}" Dec 12 17:34:24.621171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:34:24.621303 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:34:24.621962 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:34:24.624899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:34:24.651238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 12 17:34:25.458974 containerd[1530]: time="2025-12-12T17:34:25.458738174Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 17:34:25.469788 containerd[1530]: time="2025-12-12T17:34:25.469750397Z" level=info msg="Container db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:25.478149 containerd[1530]: time="2025-12-12T17:34:25.478109198Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\"" Dec 12 17:34:25.479851 containerd[1530]: time="2025-12-12T17:34:25.479735620Z" level=info msg="StartContainer for \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\"" Dec 12 17:34:25.482723 containerd[1530]: time="2025-12-12T17:34:25.482680013Z" level=info msg="connecting to shim db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6" address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" protocol=ttrpc version=3 Dec 12 17:34:25.510629 systemd[1]: Started cri-containerd-db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6.scope - libcontainer container db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6. Dec 12 17:34:25.587694 systemd[1]: cri-containerd-db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6.scope: Deactivated successfully. 
Dec 12 17:34:25.589314 containerd[1530]: time="2025-12-12T17:34:25.589266944Z" level=info msg="received container exit event container_id:\"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" id:\"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" pid:3275 exited_at:{seconds:1765560865 nanos:589039896}" Dec 12 17:34:25.590592 containerd[1530]: time="2025-12-12T17:34:25.590566514Z" level=info msg="StartContainer for \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" returns successfully" Dec 12 17:34:25.608253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6-rootfs.mount: Deactivated successfully. Dec 12 17:34:26.464988 containerd[1530]: time="2025-12-12T17:34:26.464927001Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 17:34:26.474612 containerd[1530]: time="2025-12-12T17:34:26.474563679Z" level=info msg="Container 8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:26.483206 containerd[1530]: time="2025-12-12T17:34:26.483156239Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\"" Dec 12 17:34:26.484006 containerd[1530]: time="2025-12-12T17:34:26.483713419Z" level=info msg="StartContainer for \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\"" Dec 12 17:34:26.484722 containerd[1530]: time="2025-12-12T17:34:26.484691856Z" level=info msg="connecting to shim 8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53" 
address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" protocol=ttrpc version=3 Dec 12 17:34:26.510679 systemd[1]: Started cri-containerd-8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53.scope - libcontainer container 8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53. Dec 12 17:34:26.539304 systemd[1]: cri-containerd-8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53.scope: Deactivated successfully. Dec 12 17:34:26.546490 containerd[1530]: time="2025-12-12T17:34:26.546362469Z" level=info msg="received container exit event container_id:\"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" id:\"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" pid:3316 exited_at:{seconds:1765560866 nanos:539922589}" Dec 12 17:34:26.556211 containerd[1530]: time="2025-12-12T17:34:26.556167314Z" level=info msg="StartContainer for \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" returns successfully" Dec 12 17:34:26.576990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53-rootfs.mount: Deactivated successfully. 
Dec 12 17:34:27.471973 containerd[1530]: time="2025-12-12T17:34:27.471914140Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 17:34:27.496726 containerd[1530]: time="2025-12-12T17:34:27.495890764Z" level=info msg="Container c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:27.503283 containerd[1530]: time="2025-12-12T17:34:27.503224428Z" level=info msg="CreateContainer within sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\"" Dec 12 17:34:27.505094 containerd[1530]: time="2025-12-12T17:34:27.503962815Z" level=info msg="StartContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\"" Dec 12 17:34:27.505094 containerd[1530]: time="2025-12-12T17:34:27.504892328Z" level=info msg="connecting to shim c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9" address="unix:///run/containerd/s/5fed5b78502d245040f7442d53c52afa0cceb67fa6ab2d73a7ab3d609d8bb48f" protocol=ttrpc version=3 Dec 12 17:34:27.525652 systemd[1]: Started cri-containerd-c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9.scope - libcontainer container c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9. 
Dec 12 17:34:27.569702 containerd[1530]: time="2025-12-12T17:34:27.569574818Z" level=info msg="StartContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" returns successfully" Dec 12 17:34:27.681059 kubelet[2679]: I1212 17:34:27.681021 2679 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:34:27.739439 systemd[1]: Created slice kubepods-burstable-podd55cdee8_3a7d_4e7a_88f3_09c168d2fc3a.slice - libcontainer container kubepods-burstable-podd55cdee8_3a7d_4e7a_88f3_09c168d2fc3a.slice. Dec 12 17:34:27.746419 systemd[1]: Created slice kubepods-burstable-pod9cc15d43_fd3a_44f7_b241_7b241352171a.slice - libcontainer container kubepods-burstable-pod9cc15d43_fd3a_44f7_b241_7b241352171a.slice. Dec 12 17:34:27.772328 kubelet[2679]: I1212 17:34:27.772287 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cc15d43-fd3a-44f7-b241-7b241352171a-config-volume\") pod \"coredns-674b8bbfcf-jw69z\" (UID: \"9cc15d43-fd3a-44f7-b241-7b241352171a\") " pod="kube-system/coredns-674b8bbfcf-jw69z" Dec 12 17:34:27.772328 kubelet[2679]: I1212 17:34:27.772331 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpv9f\" (UniqueName: \"kubernetes.io/projected/9cc15d43-fd3a-44f7-b241-7b241352171a-kube-api-access-gpv9f\") pod \"coredns-674b8bbfcf-jw69z\" (UID: \"9cc15d43-fd3a-44f7-b241-7b241352171a\") " pod="kube-system/coredns-674b8bbfcf-jw69z" Dec 12 17:34:27.772517 kubelet[2679]: I1212 17:34:27.772351 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a-config-volume\") pod \"coredns-674b8bbfcf-ppfl4\" (UID: \"d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a\") " pod="kube-system/coredns-674b8bbfcf-ppfl4" Dec 12 17:34:27.772517 
kubelet[2679]: I1212 17:34:27.772371 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74sr\" (UniqueName: \"kubernetes.io/projected/d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a-kube-api-access-q74sr\") pod \"coredns-674b8bbfcf-ppfl4\" (UID: \"d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a\") " pod="kube-system/coredns-674b8bbfcf-ppfl4" Dec 12 17:34:28.046149 containerd[1530]: time="2025-12-12T17:34:28.046043171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ppfl4,Uid:d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:28.050476 containerd[1530]: time="2025-12-12T17:34:28.050429844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jw69z,Uid:9cc15d43-fd3a-44f7-b241-7b241352171a,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:29.650529 systemd-networkd[1439]: cilium_host: Link UP Dec 12 17:34:29.650701 systemd-networkd[1439]: cilium_net: Link UP Dec 12 17:34:29.650903 systemd-networkd[1439]: cilium_net: Gained carrier Dec 12 17:34:29.651097 systemd-networkd[1439]: cilium_host: Gained carrier Dec 12 17:34:29.651262 systemd-networkd[1439]: cilium_net: Gained IPv6LL Dec 12 17:34:29.740184 systemd-networkd[1439]: cilium_vxlan: Link UP Dec 12 17:34:29.740192 systemd-networkd[1439]: cilium_vxlan: Gained carrier Dec 12 17:34:30.011497 kernel: NET: Registered PF_ALG protocol family Dec 12 17:34:30.344601 systemd-networkd[1439]: cilium_host: Gained IPv6LL Dec 12 17:34:30.590298 systemd-networkd[1439]: lxc_health: Link UP Dec 12 17:34:30.593594 systemd-networkd[1439]: lxc_health: Gained carrier Dec 12 17:34:31.085942 systemd-networkd[1439]: lxcfb2b6796f379: Link UP Dec 12 17:34:31.097470 systemd-networkd[1439]: lxc866e8b703b27: Link UP Dec 12 17:34:31.107474 kernel: eth0: renamed from tmp5c7c3 Dec 12 17:34:31.108464 kernel: eth0: renamed from tmp3f7be Dec 12 17:34:31.109018 systemd-networkd[1439]: lxcfb2b6796f379: Gained carrier Dec 
12 17:34:31.109437 systemd-networkd[1439]: lxc866e8b703b27: Gained carrier Dec 12 17:34:31.624633 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL Dec 12 17:34:31.976801 kubelet[2679]: I1212 17:34:31.976739 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-thkdg" podStartSLOduration=9.34899613 podStartE2EDuration="16.976724169s" podCreationTimestamp="2025-12-12 17:34:15 +0000 UTC" firstStartedPulling="2025-12-12 17:34:16.037488123 +0000 UTC m=+6.788647520" lastFinishedPulling="2025-12-12 17:34:23.665216162 +0000 UTC m=+14.416375559" observedRunningTime="2025-12-12 17:34:28.501858758 +0000 UTC m=+19.253018155" watchObservedRunningTime="2025-12-12 17:34:31.976724169 +0000 UTC m=+22.727883566" Dec 12 17:34:32.072603 systemd-networkd[1439]: lxc_health: Gained IPv6LL Dec 12 17:34:32.520598 systemd-networkd[1439]: lxc866e8b703b27: Gained IPv6LL Dec 12 17:34:32.904586 systemd-networkd[1439]: lxcfb2b6796f379: Gained IPv6LL Dec 12 17:34:34.048474 kubelet[2679]: I1212 17:34:34.048420 2679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:34:34.771623 containerd[1530]: time="2025-12-12T17:34:34.771543675Z" level=info msg="connecting to shim 3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd" address="unix:///run/containerd/s/bf6fa50e57670867f988bb2b70abb96669adc0d029dd532750e617d3aa13e125" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:34.773518 containerd[1530]: time="2025-12-12T17:34:34.773475211Z" level=info msg="connecting to shim 5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9" address="unix:///run/containerd/s/76344094084f77e1b7e76b80e0cff0fe83fcbeb9a3bdc11dd029768059d60b3b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:34.793610 systemd[1]: Started cri-containerd-3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd.scope - libcontainer container 3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd. 
Dec 12 17:34:34.798525 systemd[1]: Started cri-containerd-5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9.scope - libcontainer container 5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9. Dec 12 17:34:34.810431 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:34:34.813487 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:34:34.838458 containerd[1530]: time="2025-12-12T17:34:34.838402084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ppfl4,Uid:d55cdee8-3a7d-4e7a-88f3-09c168d2fc3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9\"" Dec 12 17:34:34.841979 containerd[1530]: time="2025-12-12T17:34:34.841947266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jw69z,Uid:9cc15d43-fd3a-44f7-b241-7b241352171a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd\"" Dec 12 17:34:34.851375 containerd[1530]: time="2025-12-12T17:34:34.851324977Z" level=info msg="CreateContainer within sandbox \"5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:34:34.853207 containerd[1530]: time="2025-12-12T17:34:34.852594773Z" level=info msg="CreateContainer within sandbox \"3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:34:34.861879 containerd[1530]: time="2025-12-12T17:34:34.861827080Z" level=info msg="Container f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:34.862599 containerd[1530]: time="2025-12-12T17:34:34.862577141Z" level=info msg="Container 
d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:34.862940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457699225.mount: Deactivated successfully. Dec 12 17:34:34.867628 containerd[1530]: time="2025-12-12T17:34:34.867597446Z" level=info msg="CreateContainer within sandbox \"3f7be646d48712023d01d24c1f51521526434b8a61ac5b7ab2c54f1aa21a0cbd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54\"" Dec 12 17:34:34.868251 containerd[1530]: time="2025-12-12T17:34:34.868227984Z" level=info msg="StartContainer for \"f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54\"" Dec 12 17:34:34.870351 containerd[1530]: time="2025-12-12T17:34:34.869835911Z" level=info msg="connecting to shim f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54" address="unix:///run/containerd/s/bf6fa50e57670867f988bb2b70abb96669adc0d029dd532750e617d3aa13e125" protocol=ttrpc version=3 Dec 12 17:34:34.872438 containerd[1530]: time="2025-12-12T17:34:34.872394185Z" level=info msg="CreateContainer within sandbox \"5c7c326be98f56beeebd745dccdab1a8752b1cd64754cd66ee2da980409d3dd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50\"" Dec 12 17:34:34.874463 containerd[1530]: time="2025-12-12T17:34:34.874105754Z" level=info msg="StartContainer for \"d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50\"" Dec 12 17:34:34.875112 containerd[1530]: time="2025-12-12T17:34:34.875077662Z" level=info msg="connecting to shim d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50" address="unix:///run/containerd/s/76344094084f77e1b7e76b80e0cff0fe83fcbeb9a3bdc11dd029768059d60b3b" protocol=ttrpc version=3 Dec 12 17:34:34.899660 systemd[1]: Started 
cri-containerd-d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50.scope - libcontainer container d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50. Dec 12 17:34:34.900884 systemd[1]: Started cri-containerd-f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54.scope - libcontainer container f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54. Dec 12 17:34:34.933368 containerd[1530]: time="2025-12-12T17:34:34.933327382Z" level=info msg="StartContainer for \"d6d45fd1f6d3b6ee40bc56ab61e666b6ffe1b9abff785463da361bd024963e50\" returns successfully" Dec 12 17:34:34.934652 containerd[1530]: time="2025-12-12T17:34:34.934604219Z" level=info msg="StartContainer for \"f315a20a7409212532d5c8715915cd14bef8ad2bc4480c308298749557821b54\" returns successfully" Dec 12 17:34:35.508779 kubelet[2679]: I1212 17:34:35.508488 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ppfl4" podStartSLOduration=20.508469634 podStartE2EDuration="20.508469634s" podCreationTimestamp="2025-12-12 17:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:35.508152385 +0000 UTC m=+26.259311742" watchObservedRunningTime="2025-12-12 17:34:35.508469634 +0000 UTC m=+26.259629031" Dec 12 17:34:35.524857 kubelet[2679]: I1212 17:34:35.523630 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jw69z" podStartSLOduration=20.523612057 podStartE2EDuration="20.523612057s" podCreationTimestamp="2025-12-12 17:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:35.52299504 +0000 UTC m=+26.274155157" watchObservedRunningTime="2025-12-12 17:34:35.523612057 +0000 UTC m=+26.274771454" Dec 12 17:34:37.435580 systemd[1]: Started 
sshd@7-10.0.0.71:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088). Dec 12 17:34:37.507472 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:37.508377 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:37.513521 systemd-logind[1510]: New session 8 of user core. Dec 12 17:34:37.522619 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:34:37.651607 sshd[4040]: Connection closed by 10.0.0.1 port 34088 Dec 12 17:34:37.652101 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:37.655738 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:34088.service: Deactivated successfully. Dec 12 17:34:37.657480 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:34:37.659021 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:34:37.660346 systemd-logind[1510]: Removed session 8. Dec 12 17:34:42.667651 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:33932.service - OpenSSH per-connection server daemon (10.0.0.1:33932). Dec 12 17:34:42.736429 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 33932 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:42.735360 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:42.740502 systemd-logind[1510]: New session 9 of user core. Dec 12 17:34:42.750669 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:34:42.872166 sshd[4058]: Connection closed by 10.0.0.1 port 33932 Dec 12 17:34:42.870947 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:42.875393 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:33932.service: Deactivated successfully. Dec 12 17:34:42.877188 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 12 17:34:42.878420 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:34:42.879873 systemd-logind[1510]: Removed session 9. Dec 12 17:34:47.884210 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942). Dec 12 17:34:47.972948 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:47.975220 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:47.983209 systemd-logind[1510]: New session 10 of user core. Dec 12 17:34:47.998742 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:34:48.149770 sshd[4078]: Connection closed by 10.0.0.1 port 33942 Dec 12 17:34:48.150648 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:48.155318 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:33942.service: Deactivated successfully. Dec 12 17:34:48.157053 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:34:48.159802 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:34:48.161054 systemd-logind[1510]: Removed session 10. Dec 12 17:34:53.170701 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:42486.service - OpenSSH per-connection server daemon (10.0.0.1:42486). Dec 12 17:34:53.237959 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 42486 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:53.241008 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:53.245916 systemd-logind[1510]: New session 11 of user core. Dec 12 17:34:53.263715 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:34:53.388661 sshd[4095]: Connection closed by 10.0.0.1 port 42486 Dec 12 17:34:53.388979 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:53.403126 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:42486.service: Deactivated successfully. Dec 12 17:34:53.406431 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:34:53.408633 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:34:53.411833 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:42492.service - OpenSSH per-connection server daemon (10.0.0.1:42492). Dec 12 17:34:53.412404 systemd-logind[1510]: Removed session 11. Dec 12 17:34:53.482630 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 42492 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:53.484010 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:53.489104 systemd-logind[1510]: New session 12 of user core. Dec 12 17:34:53.496695 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:34:53.676620 sshd[4113]: Connection closed by 10.0.0.1 port 42492 Dec 12 17:34:53.677790 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:53.697198 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:42492.service: Deactivated successfully. Dec 12 17:34:53.701394 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:34:53.704251 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:34:53.709092 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:42494.service - OpenSSH per-connection server daemon (10.0.0.1:42494). Dec 12 17:34:53.711417 systemd-logind[1510]: Removed session 12. 
Dec 12 17:34:53.772575 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 42494 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:53.774174 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:53.779299 systemd-logind[1510]: New session 13 of user core. Dec 12 17:34:53.789655 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:34:53.912810 sshd[4129]: Connection closed by 10.0.0.1 port 42494 Dec 12 17:34:53.913574 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:53.917263 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:42494.service: Deactivated successfully. Dec 12 17:34:53.919269 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:34:53.920185 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:34:53.921752 systemd-logind[1510]: Removed session 13. Dec 12 17:34:58.929476 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:42506.service - OpenSSH per-connection server daemon (10.0.0.1:42506). Dec 12 17:34:58.993299 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 42506 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:34:58.994698 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:59.000115 systemd-logind[1510]: New session 14 of user core. Dec 12 17:34:59.010622 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:34:59.155679 sshd[4147]: Connection closed by 10.0.0.1 port 42506 Dec 12 17:34:59.156094 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:59.160021 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:34:59.160878 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:42506.service: Deactivated successfully. Dec 12 17:34:59.163306 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 12 17:34:59.166363 systemd-logind[1510]: Removed session 14.
Dec 12 17:35:04.168190 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:39568.service - OpenSSH per-connection server daemon (10.0.0.1:39568).
Dec 12 17:35:04.219145 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 39568 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:04.220399 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:04.224618 systemd-logind[1510]: New session 15 of user core.
Dec 12 17:35:04.235662 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 17:35:04.377544 sshd[4167]: Connection closed by 10.0.0.1 port 39568
Dec 12 17:35:04.378319 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:04.394815 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:39568.service: Deactivated successfully.
Dec 12 17:35:04.398394 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 17:35:04.401624 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Dec 12 17:35:04.404011 systemd-logind[1510]: Removed session 15.
Dec 12 17:35:04.406158 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582).
Dec 12 17:35:04.470598 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:04.472015 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:04.476514 systemd-logind[1510]: New session 16 of user core.
Dec 12 17:35:04.485648 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 17:35:04.702763 sshd[4184]: Connection closed by 10.0.0.1 port 39582
Dec 12 17:35:04.703944 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:04.713568 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:39582.service: Deactivated successfully.
Dec 12 17:35:04.715708 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 17:35:04.717901 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Dec 12 17:35:04.720232 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:39586.service - OpenSSH per-connection server daemon (10.0.0.1:39586).
Dec 12 17:35:04.721737 systemd-logind[1510]: Removed session 16.
Dec 12 17:35:04.776026 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 39586 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:04.778966 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:04.784859 systemd-logind[1510]: New session 17 of user core.
Dec 12 17:35:04.794649 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:35:05.415355 sshd[4199]: Connection closed by 10.0.0.1 port 39586
Dec 12 17:35:05.415872 sshd-session[4196]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:05.428307 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:39586.service: Deactivated successfully.
Dec 12 17:35:05.433406 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:35:05.435750 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:35:05.442037 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:39598.service - OpenSSH per-connection server daemon (10.0.0.1:39598).
Dec 12 17:35:05.443372 systemd-logind[1510]: Removed session 17.
Dec 12 17:35:05.501004 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 39598 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:05.502702 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:05.507054 systemd-logind[1510]: New session 18 of user core.
Dec 12 17:35:05.521671 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:35:05.754724 sshd[4224]: Connection closed by 10.0.0.1 port 39598
Dec 12 17:35:05.755101 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:05.763283 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:39598.service: Deactivated successfully.
Dec 12 17:35:05.765287 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:35:05.770697 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:35:05.773936 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608).
Dec 12 17:35:05.774414 systemd-logind[1510]: Removed session 18.
Dec 12 17:35:05.837225 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:05.838840 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:05.844373 systemd-logind[1510]: New session 19 of user core.
Dec 12 17:35:05.851662 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:35:05.973530 sshd[4238]: Connection closed by 10.0.0.1 port 39608
Dec 12 17:35:05.974059 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:05.977724 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:39608.service: Deactivated successfully.
Dec 12 17:35:05.980351 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:35:05.981534 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:35:05.983010 systemd-logind[1510]: Removed session 19.
Dec 12 17:35:10.986456 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:42898.service - OpenSSH per-connection server daemon (10.0.0.1:42898).
Dec 12 17:35:11.056876 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 42898 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:11.058302 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:11.063072 systemd-logind[1510]: New session 20 of user core.
Dec 12 17:35:11.074688 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 17:35:11.197362 sshd[4259]: Connection closed by 10.0.0.1 port 42898
Dec 12 17:35:11.197735 sshd-session[4256]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:11.203009 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:42898.service: Deactivated successfully.
Dec 12 17:35:11.205313 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 17:35:11.206328 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Dec 12 17:35:11.208033 systemd-logind[1510]: Removed session 20.
Dec 12 17:35:16.221004 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:42900.service - OpenSSH per-connection server daemon (10.0.0.1:42900).
Dec 12 17:35:16.289183 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 42900 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:16.290897 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:16.295219 systemd-logind[1510]: New session 21 of user core.
Dec 12 17:35:16.302605 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 17:35:16.417403 sshd[4275]: Connection closed by 10.0.0.1 port 42900
Dec 12 17:35:16.417779 sshd-session[4272]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:16.431051 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:42900.service: Deactivated successfully.
Dec 12 17:35:16.432986 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 17:35:16.433854 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Dec 12 17:35:16.436079 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910).
Dec 12 17:35:16.437939 systemd-logind[1510]: Removed session 21.
Dec 12 17:35:16.505904 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:35:16.509067 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:16.513605 systemd-logind[1510]: New session 22 of user core.
Dec 12 17:35:16.529701 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 17:35:18.431464 containerd[1530]: time="2025-12-12T17:35:18.431263175Z" level=info msg="StopContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" with timeout 30 (s)"
Dec 12 17:35:18.432332 containerd[1530]: time="2025-12-12T17:35:18.432133061Z" level=info msg="Stop container \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" with signal terminated"
Dec 12 17:35:18.446405 systemd[1]: cri-containerd-62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2.scope: Deactivated successfully.
Dec 12 17:35:18.449256 containerd[1530]: time="2025-12-12T17:35:18.449217383Z" level=info msg="received container exit event container_id:\"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" id:\"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" pid:3098 exited_at:{seconds:1765560918 nanos:448982381}"
Dec 12 17:35:18.457644 containerd[1530]: time="2025-12-12T17:35:18.457578962Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 17:35:18.465214 containerd[1530]: time="2025-12-12T17:35:18.465082696Z" level=info msg="StopContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" with timeout 2 (s)"
Dec 12 17:35:18.465593 containerd[1530]: time="2025-12-12T17:35:18.465560139Z" level=info msg="Stop container \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" with signal terminated"
Dec 12 17:35:18.472919 systemd-networkd[1439]: lxc_health: Link DOWN
Dec 12 17:35:18.472926 systemd-networkd[1439]: lxc_health: Lost carrier
Dec 12 17:35:18.482341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2-rootfs.mount: Deactivated successfully.
Dec 12 17:35:18.492829 systemd[1]: cri-containerd-c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9.scope: Deactivated successfully.
Dec 12 17:35:18.493179 systemd[1]: cri-containerd-c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9.scope: Consumed 6.397s CPU time, 121.4M memory peak, 136K read from disk, 12.9M written to disk.
Dec 12 17:35:18.495366 containerd[1530]: time="2025-12-12T17:35:18.495307992Z" level=info msg="received container exit event container_id:\"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" id:\"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" pid:3354 exited_at:{seconds:1765560918 nanos:495141830}"
Dec 12 17:35:18.496573 containerd[1530]: time="2025-12-12T17:35:18.496546200Z" level=info msg="StopContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" returns successfully"
Dec 12 17:35:18.499067 containerd[1530]: time="2025-12-12T17:35:18.499011178Z" level=info msg="StopPodSandbox for \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\""
Dec 12 17:35:18.507552 containerd[1530]: time="2025-12-12T17:35:18.507411118Z" level=info msg="Container to stop \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.514759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9-rootfs.mount: Deactivated successfully.
Dec 12 17:35:18.518029 systemd[1]: cri-containerd-1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507.scope: Deactivated successfully.
Dec 12 17:35:18.520030 containerd[1530]: time="2025-12-12T17:35:18.519993888Z" level=info msg="received sandbox exit event container_id:\"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" id:\"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" exit_status:137 exited_at:{seconds:1765560918 nanos:519515964}" monitor_name=podsandbox
Dec 12 17:35:18.526870 containerd[1530]: time="2025-12-12T17:35:18.526834336Z" level=info msg="StopContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" returns successfully"
Dec 12 17:35:18.527316 containerd[1530]: time="2025-12-12T17:35:18.527291460Z" level=info msg="StopPodSandbox for \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\""
Dec 12 17:35:18.527389 containerd[1530]: time="2025-12-12T17:35:18.527373740Z" level=info msg="Container to stop \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.527418 containerd[1530]: time="2025-12-12T17:35:18.527394100Z" level=info msg="Container to stop \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.527418 containerd[1530]: time="2025-12-12T17:35:18.527413381Z" level=info msg="Container to stop \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.527476 containerd[1530]: time="2025-12-12T17:35:18.527423701Z" level=info msg="Container to stop \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.527476 containerd[1530]: time="2025-12-12T17:35:18.527433581Z" level=info msg="Container to stop \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 17:35:18.533703 systemd[1]: cri-containerd-71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c.scope: Deactivated successfully.
Dec 12 17:35:18.535397 containerd[1530]: time="2025-12-12T17:35:18.535356757Z" level=info msg="received sandbox exit event container_id:\"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" id:\"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" exit_status:137 exited_at:{seconds:1765560918 nanos:534841834}" monitor_name=podsandbox
Dec 12 17:35:18.548881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507-rootfs.mount: Deactivated successfully.
Dec 12 17:35:18.554925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c-rootfs.mount: Deactivated successfully.
Dec 12 17:35:18.562676 containerd[1530]: time="2025-12-12T17:35:18.562635992Z" level=info msg="shim disconnected" id=1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507 namespace=k8s.io
Dec 12 17:35:18.562837 containerd[1530]: time="2025-12-12T17:35:18.562671592Z" level=warning msg="cleaning up after shim disconnected" id=1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507 namespace=k8s.io
Dec 12 17:35:18.562837 containerd[1530]: time="2025-12-12T17:35:18.562699992Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 17:35:18.568867 containerd[1530]: time="2025-12-12T17:35:18.568768236Z" level=info msg="shim disconnected" id=71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c namespace=k8s.io
Dec 12 17:35:18.569495 containerd[1530]: time="2025-12-12T17:35:18.568809756Z" level=warning msg="cleaning up after shim disconnected" id=71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c namespace=k8s.io
Dec 12 17:35:18.569551 containerd[1530]: time="2025-12-12T17:35:18.569494561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 17:35:18.595784 containerd[1530]: time="2025-12-12T17:35:18.595602107Z" level=info msg="TearDown network for sandbox \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" successfully"
Dec 12 17:35:18.595784 containerd[1530]: time="2025-12-12T17:35:18.595636027Z" level=info msg="StopPodSandbox for \"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" returns successfully"
Dec 12 17:35:18.596025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c-shm.mount: Deactivated successfully.
Dec 12 17:35:18.596129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507-shm.mount: Deactivated successfully.
Dec 12 17:35:18.596218 containerd[1530]: time="2025-12-12T17:35:18.596132991Z" level=info msg="TearDown network for sandbox \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" successfully"
Dec 12 17:35:18.596218 containerd[1530]: time="2025-12-12T17:35:18.596153511Z" level=info msg="StopPodSandbox for \"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" returns successfully"
Dec 12 17:35:18.600430 containerd[1530]: time="2025-12-12T17:35:18.600391221Z" level=info msg="received sandbox container exit event sandbox_id:\"71746ee5ea3b922f7347ebeb1283ed20f05708f50fc28aefdb9d294324dde93c\" exit_status:137 exited_at:{seconds:1765560918 nanos:534841834}" monitor_name=criService
Dec 12 17:35:18.601304 containerd[1530]: time="2025-12-12T17:35:18.600842505Z" level=info msg="received sandbox container exit event sandbox_id:\"1080e33973f1ff78d3d09e4f5c68661377375a659c13ae62974dd99189ac6507\" exit_status:137 exited_at:{seconds:1765560918 nanos:519515964}" monitor_name=criService
Dec 12 17:35:18.618147 kubelet[2679]: I1212 17:35:18.618099 2679 scope.go:117] "RemoveContainer" containerID="c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9"
Dec 12 17:35:18.620277 containerd[1530]: time="2025-12-12T17:35:18.620243003Z" level=info msg="RemoveContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\""
Dec 12 17:35:18.627997 containerd[1530]: time="2025-12-12T17:35:18.627943538Z" level=info msg="RemoveContainer for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" returns successfully"
Dec 12 17:35:18.628694 kubelet[2679]: I1212 17:35:18.628639 2679 scope.go:117] "RemoveContainer" containerID="8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53"
Dec 12 17:35:18.630655 containerd[1530]: time="2025-12-12T17:35:18.630536916Z" level=info msg="RemoveContainer for \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\""
Dec 12 17:35:18.634242 containerd[1530]: time="2025-12-12T17:35:18.634210343Z" level=info msg="RemoveContainer for \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" returns successfully"
Dec 12 17:35:18.634400 kubelet[2679]: I1212 17:35:18.634376 2679 scope.go:117] "RemoveContainer" containerID="db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6"
Dec 12 17:35:18.636807 containerd[1530]: time="2025-12-12T17:35:18.636680040Z" level=info msg="RemoveContainer for \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\""
Dec 12 17:35:18.640871 containerd[1530]: time="2025-12-12T17:35:18.640815790Z" level=info msg="RemoveContainer for \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" returns successfully"
Dec 12 17:35:18.641117 kubelet[2679]: I1212 17:35:18.641099 2679 scope.go:117] "RemoveContainer" containerID="57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3"
Dec 12 17:35:18.642472 containerd[1530]: time="2025-12-12T17:35:18.642369361Z" level=info msg="RemoveContainer for \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\""
Dec 12 17:35:18.645234 containerd[1530]: time="2025-12-12T17:35:18.645203461Z" level=info msg="RemoveContainer for \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" returns successfully"
Dec 12 17:35:18.645409 kubelet[2679]: I1212 17:35:18.645386 2679 scope.go:117] "RemoveContainer" containerID="b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa"
Dec 12 17:35:18.646966 containerd[1530]: time="2025-12-12T17:35:18.646938153Z" level=info msg="RemoveContainer for \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\""
Dec 12 17:35:18.650215 containerd[1530]: time="2025-12-12T17:35:18.650155776Z" level=info msg="RemoveContainer for \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" returns successfully"
Dec 12 17:35:18.650431 kubelet[2679]: I1212 17:35:18.650413 2679 scope.go:117] "RemoveContainer" containerID="c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9"
Dec 12 17:35:18.650686 containerd[1530]: time="2025-12-12T17:35:18.650614860Z" level=error msg="ContainerStatus for \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\": not found"
Dec 12 17:35:18.653241 kubelet[2679]: E1212 17:35:18.653188 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\": not found" containerID="c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9"
Dec 12 17:35:18.653289 kubelet[2679]: I1212 17:35:18.653246 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9"} err="failed to get container status \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5a453e67284c41770022ef310fc723d6eda91de86b9a7f2169bccf5590203c9\": not found"
Dec 12 17:35:18.653332 kubelet[2679]: I1212 17:35:18.653289 2679 scope.go:117] "RemoveContainer" containerID="8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53"
Dec 12 17:35:18.653603 containerd[1530]: time="2025-12-12T17:35:18.653547641Z" level=error msg="ContainerStatus for \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\": not found"
Dec 12 17:35:18.653707 kubelet[2679]: E1212 17:35:18.653687 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\": not found" containerID="8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53"
Dec 12 17:35:18.653752 kubelet[2679]: I1212 17:35:18.653719 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53"} err="failed to get container status \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bb4faefd1a7dd2bffdf7c2fa8ea7d76965f20e35c2219743c820eeb40e30e53\": not found"
Dec 12 17:35:18.653752 kubelet[2679]: I1212 17:35:18.653733 2679 scope.go:117] "RemoveContainer" containerID="db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6"
Dec 12 17:35:18.654054 containerd[1530]: time="2025-12-12T17:35:18.653966804Z" level=error msg="ContainerStatus for \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\": not found"
Dec 12 17:35:18.654140 kubelet[2679]: E1212 17:35:18.654115 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\": not found" containerID="db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6"
Dec 12 17:35:18.654178 kubelet[2679]: I1212 17:35:18.654158 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6"} err="failed to get container status \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"db63dc2e84fd164ddd8175d5d938276de96909a7c1774a1d813e052dc0a0c0a6\": not found"
Dec 12 17:35:18.654202 kubelet[2679]: I1212 17:35:18.654177 2679 scope.go:117] "RemoveContainer" containerID="57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3"
Dec 12 17:35:18.654397 containerd[1530]: time="2025-12-12T17:35:18.654338526Z" level=error msg="ContainerStatus for \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\": not found"
Dec 12 17:35:18.654463 kubelet[2679]: E1212 17:35:18.654437 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\": not found" containerID="57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3"
Dec 12 17:35:18.654502 kubelet[2679]: I1212 17:35:18.654469 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3"} err="failed to get container status \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"57ee13c50afbd85cd9c8a554a51df9749a939dbd657ba03274d9effe789c14e3\": not found"
Dec 12 17:35:18.654502 kubelet[2679]: I1212 17:35:18.654480 2679 scope.go:117] "RemoveContainer" containerID="b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa"
Dec 12 17:35:18.654634 containerd[1530]: time="2025-12-12T17:35:18.654605328Z" level=error msg="ContainerStatus for \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\": not found"
Dec 12 17:35:18.654741 kubelet[2679]: E1212 17:35:18.654724 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\": not found" containerID="b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa"
Dec 12 17:35:18.654781 kubelet[2679]: I1212 17:35:18.654747 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa"} err="failed to get container status \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8ac2e870718456476efe67a3e79204110102ddb99e1dd437bd0c7bd43e202aa\": not found"
Dec 12 17:35:18.654781 kubelet[2679]: I1212 17:35:18.654761 2679 scope.go:117] "RemoveContainer" containerID="62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2"
Dec 12 17:35:18.656193 containerd[1530]: time="2025-12-12T17:35:18.656168779Z" level=info msg="RemoveContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\""
Dec 12 17:35:18.659234 containerd[1530]: time="2025-12-12T17:35:18.659138920Z" level=info msg="RemoveContainer for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" returns successfully"
Dec 12 17:35:18.659339 kubelet[2679]: I1212 17:35:18.659313 2679 scope.go:117] "RemoveContainer" containerID="62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2"
Dec 12 17:35:18.659591 containerd[1530]: time="2025-12-12T17:35:18.659562563Z" level=error msg="ContainerStatus for \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\": not found"
Dec 12 17:35:18.659718 kubelet[2679]: E1212 17:35:18.659690 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\": not found" containerID="62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2"
Dec 12 17:35:18.659761 kubelet[2679]: I1212 17:35:18.659741 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2"} err="failed to get container status \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"62c3060ff0647b67949eefd92db6a8e20d3831278160d2e8cc81fc8eb2b150e2\": not found"
Dec 12 17:35:18.710275 kubelet[2679]: I1212 17:35:18.710152 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cktzk\" (UniqueName: \"kubernetes.io/projected/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-kube-api-access-cktzk\") pod \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\" (UID: \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\") "
Dec 12 17:35:18.710275 kubelet[2679]: I1212 17:35:18.710196 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6m8x\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-kube-api-access-g6m8x\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710275 kubelet[2679]: I1212 17:35:18.710216 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-clustermesh-secrets\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710275 kubelet[2679]: I1212 17:35:18.710231 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-lib-modules\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710275 kubelet[2679]: I1212 17:35:18.710268 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-etc-cni-netd\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710285 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-xtables-lock\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710304 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-cilium-config-path\") pod \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\" (UID: \"1d6f458e-a5a8-4b88-98f1-e0f9460b56a0\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710319 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-run\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710334 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-net\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710351 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hubble-tls\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710489 kubelet[2679]: I1212 17:35:18.710367 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cni-path\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710611 kubelet[2679]: I1212 17:35:18.710383 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-config-path\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710611 kubelet[2679]: I1212 17:35:18.710397 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-bpf-maps\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710611 kubelet[2679]: I1212 17:35:18.710414 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-cgroup\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710611 kubelet[2679]: I1212 17:35:18.710555 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-kernel\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.710611 kubelet[2679]: I1212 17:35:18.710578 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hostproc\") pod \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\" (UID: \"03a8a009-c38d-4c0d-b25c-98d1cef4c24b\") "
Dec 12 17:35:18.716469 kubelet[2679]: I1212 17:35:18.715182 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hostproc" (OuterVolumeSpecName: "hostproc") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:35:18.716469 kubelet[2679]: I1212 17:35:18.715210 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:35:18.716469 kubelet[2679]: I1212 17:35:18.715249 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:35:18.717254 kubelet[2679]: I1212 17:35:18.717203 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 17:35:18.717316 kubelet[2679]: I1212 17:35:18.717260 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.717316 kubelet[2679]: I1212 17:35:18.717278 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.717316 kubelet[2679]: I1212 17:35:18.717297 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.717316 kubelet[2679]: I1212 17:35:18.717312 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.724729 kubelet[2679]: I1212 17:35:18.715183 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cni-path" (OuterVolumeSpecName: "cni-path") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.724992 kubelet[2679]: I1212 17:35:18.724922 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:35:18.728490 kubelet[2679]: I1212 17:35:18.727362 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:35:18.732207 kubelet[2679]: I1212 17:35:18.732161 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-kube-api-access-cktzk" (OuterVolumeSpecName: "kube-api-access-cktzk") pod "1d6f458e-a5a8-4b88-98f1-e0f9460b56a0" (UID: "1d6f458e-a5a8-4b88-98f1-e0f9460b56a0"). InnerVolumeSpecName "kube-api-access-cktzk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:35:18.732594 kubelet[2679]: I1212 17:35:18.732557 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-kube-api-access-g6m8x" (OuterVolumeSpecName: "kube-api-access-g6m8x") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "kube-api-access-g6m8x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:35:18.733675 kubelet[2679]: I1212 17:35:18.733631 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:35:18.736138 kubelet[2679]: I1212 17:35:18.736070 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d6f458e-a5a8-4b88-98f1-e0f9460b56a0" (UID: "1d6f458e-a5a8-4b88-98f1-e0f9460b56a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:35:18.736898 kubelet[2679]: I1212 17:35:18.736827 2679 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03a8a009-c38d-4c0d-b25c-98d1cef4c24b" (UID: "03a8a009-c38d-4c0d-b25c-98d1cef4c24b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:35:18.813813 kubelet[2679]: I1212 17:35:18.813753 2679 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.813813 kubelet[2679]: I1212 17:35:18.813790 2679 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.813813 kubelet[2679]: I1212 17:35:18.813805 2679 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.813813 kubelet[2679]: I1212 17:35:18.813818 2679 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.813813 kubelet[2679]: I1212 17:35:18.813827 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813834 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813843 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 
17:35:18.813850 2679 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813857 2679 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813865 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813872 2679 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813881 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814037 kubelet[2679]: I1212 17:35:18.813890 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814198 kubelet[2679]: I1212 17:35:18.813897 2679 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814198 kubelet[2679]: I1212 17:35:18.813904 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cktzk\" (UniqueName: 
\"kubernetes.io/projected/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0-kube-api-access-cktzk\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.814198 kubelet[2679]: I1212 17:35:18.813912 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6m8x\" (UniqueName: \"kubernetes.io/projected/03a8a009-c38d-4c0d-b25c-98d1cef4c24b-kube-api-access-g6m8x\") on node \"localhost\" DevicePath \"\"" Dec 12 17:35:18.921360 systemd[1]: Removed slice kubepods-besteffort-pod1d6f458e_a5a8_4b88_98f1_e0f9460b56a0.slice - libcontainer container kubepods-besteffort-pod1d6f458e_a5a8_4b88_98f1_e0f9460b56a0.slice. Dec 12 17:35:19.371628 kubelet[2679]: I1212 17:35:19.371562 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6f458e-a5a8-4b88-98f1-e0f9460b56a0" path="/var/lib/kubelet/pods/1d6f458e-a5a8-4b88-98f1-e0f9460b56a0/volumes" Dec 12 17:35:19.376593 systemd[1]: Removed slice kubepods-burstable-pod03a8a009_c38d_4c0d_b25c_98d1cef4c24b.slice - libcontainer container kubepods-burstable-pod03a8a009_c38d_4c0d_b25c_98d1cef4c24b.slice. Dec 12 17:35:19.376710 systemd[1]: kubepods-burstable-pod03a8a009_c38d_4c0d_b25c_98d1cef4c24b.slice: Consumed 6.513s CPU time, 121.7M memory peak, 140K read from disk, 12.9M written to disk. Dec 12 17:35:19.412040 kubelet[2679]: E1212 17:35:19.411980 2679 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:35:19.482125 systemd[1]: var-lib-kubelet-pods-03a8a009\x2dc38d\x2d4c0d\x2db25c\x2d98d1cef4c24b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg6m8x.mount: Deactivated successfully. Dec 12 17:35:19.482225 systemd[1]: var-lib-kubelet-pods-03a8a009\x2dc38d\x2d4c0d\x2db25c\x2d98d1cef4c24b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 12 17:35:19.482280 systemd[1]: var-lib-kubelet-pods-03a8a009\x2dc38d\x2d4c0d\x2db25c\x2d98d1cef4c24b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:35:19.482335 systemd[1]: var-lib-kubelet-pods-1d6f458e\x2da5a8\x2d4b88\x2d98f1\x2de0f9460b56a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcktzk.mount: Deactivated successfully. Dec 12 17:35:20.372583 sshd[4294]: Connection closed by 10.0.0.1 port 42910 Dec 12 17:35:20.373180 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:20.383802 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:42910.service: Deactivated successfully. Dec 12 17:35:20.386090 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:35:20.386358 systemd[1]: session-22.scope: Consumed 1.216s CPU time, 25.2M memory peak. Dec 12 17:35:20.387096 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:35:20.388940 systemd-logind[1510]: Removed session 22. Dec 12 17:35:20.390254 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:42924.service - OpenSSH per-connection server daemon (10.0.0.1:42924). Dec 12 17:35:20.457142 kubelet[2679]: I1212 17:35:20.456961 2679 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T17:35:20Z","lastTransitionTime":"2025-12-12T17:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 17:35:20.459783 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 42924 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:35:20.460490 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:20.467462 systemd-logind[1510]: New session 23 of user core. 
Dec 12 17:35:20.472646 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:35:21.375223 kubelet[2679]: I1212 17:35:21.375051 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a8a009-c38d-4c0d-b25c-98d1cef4c24b" path="/var/lib/kubelet/pods/03a8a009-c38d-4c0d-b25c-98d1cef4c24b/volumes" Dec 12 17:35:21.642506 sshd[4440]: Connection closed by 10.0.0.1 port 42924 Dec 12 17:35:21.642825 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:21.657667 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:42924.service: Deactivated successfully. Dec 12 17:35:21.665213 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:35:21.666223 systemd[1]: session-23.scope: Consumed 1.059s CPU time, 24.1M memory peak. Dec 12 17:35:21.668062 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:35:21.676683 systemd-logind[1510]: Removed session 23. Dec 12 17:35:21.682776 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258). Dec 12 17:35:21.692744 systemd[1]: Created slice kubepods-burstable-pod27369599_e7c4_48a3_9e47_8c2208d074a3.slice - libcontainer container kubepods-burstable-pod27369599_e7c4_48a3_9e47_8c2208d074a3.slice. Dec 12 17:35:21.752479 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:35:21.754372 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:21.759463 systemd-logind[1510]: New session 24 of user core. Dec 12 17:35:21.772657 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 17:35:21.824484 sshd[4455]: Connection closed by 10.0.0.1 port 44258 Dec 12 17:35:21.824980 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:21.832526 kubelet[2679]: I1212 17:35:21.832487 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-host-proc-sys-kernel\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.832526 kubelet[2679]: I1212 17:35:21.832534 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27369599-e7c4-48a3-9e47-8c2208d074a3-cilium-config-path\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.832526 kubelet[2679]: I1212 17:35:21.832566 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/27369599-e7c4-48a3-9e47-8c2208d074a3-cilium-ipsec-secrets\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832606 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-cni-path\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832626 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-etc-cni-netd\") pod \"cilium-xqdw4\" (UID: 
\"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832686 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-lib-modules\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832729 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-xtables-lock\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832776 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-cilium-run\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833072 kubelet[2679]: I1212 17:35:21.832813 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-bpf-maps\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832833 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-cilium-cgroup\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832850 2679 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zj5k\" (UniqueName: \"kubernetes.io/projected/27369599-e7c4-48a3-9e47-8c2208d074a3-kube-api-access-2zj5k\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832874 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27369599-e7c4-48a3-9e47-8c2208d074a3-clustermesh-secrets\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832892 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27369599-e7c4-48a3-9e47-8c2208d074a3-hubble-tls\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832907 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-host-proc-sys-net\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.833219 kubelet[2679]: I1212 17:35:21.832922 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27369599-e7c4-48a3-9e47-8c2208d074a3-hostproc\") pod \"cilium-xqdw4\" (UID: \"27369599-e7c4-48a3-9e47-8c2208d074a3\") " pod="kube-system/cilium-xqdw4" Dec 12 17:35:21.841863 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:44258.service: Deactivated successfully. 
Dec 12 17:35:21.843690 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:35:21.844363 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:35:21.846750 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:44270.service - OpenSSH per-connection server daemon (10.0.0.1:44270). Dec 12 17:35:21.850075 systemd-logind[1510]: Removed session 24. Dec 12 17:35:21.903982 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 44270 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:35:21.905500 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:21.910494 systemd-logind[1510]: New session 25 of user core. Dec 12 17:35:21.924655 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:35:22.005483 containerd[1530]: time="2025-12-12T17:35:22.005366364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqdw4,Uid:27369599-e7c4-48a3-9e47-8c2208d074a3,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:22.022919 containerd[1530]: time="2025-12-12T17:35:22.022638193Z" level=info msg="connecting to shim f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:22.051671 systemd[1]: Started cri-containerd-f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9.scope - libcontainer container f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9. 
Dec 12 17:35:22.076249 containerd[1530]: time="2025-12-12T17:35:22.076195049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqdw4,Uid:27369599-e7c4-48a3-9e47-8c2208d074a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\"" Dec 12 17:35:22.081588 containerd[1530]: time="2025-12-12T17:35:22.081541323Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:35:22.087916 containerd[1530]: time="2025-12-12T17:35:22.087872843Z" level=info msg="Container c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:22.099012 containerd[1530]: time="2025-12-12T17:35:22.098952832Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c\"" Dec 12 17:35:22.099791 containerd[1530]: time="2025-12-12T17:35:22.099554436Z" level=info msg="StartContainer for \"c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c\"" Dec 12 17:35:22.101231 containerd[1530]: time="2025-12-12T17:35:22.101199686Z" level=info msg="connecting to shim c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" protocol=ttrpc version=3 Dec 12 17:35:22.121659 systemd[1]: Started cri-containerd-c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c.scope - libcontainer container c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c. Dec 12 17:35:22.209822 systemd[1]: cri-containerd-c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c.scope: Deactivated successfully. 
Dec 12 17:35:22.217537 containerd[1530]: time="2025-12-12T17:35:22.217496057Z" level=info msg="received container exit event container_id:\"c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c\" id:\"c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c\" pid:4537 exited_at:{seconds:1765560922 nanos:211035057}" Dec 12 17:35:22.218236 containerd[1530]: time="2025-12-12T17:35:22.218206782Z" level=info msg="StartContainer for \"c4219d3e51e8106ce5ed498a66a8f015a2b4967a54c7415696396b925d8ef14c\" returns successfully" Dec 12 17:35:22.638648 containerd[1530]: time="2025-12-12T17:35:22.638011500Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:35:22.648206 containerd[1530]: time="2025-12-12T17:35:22.648163764Z" level=info msg="Container 80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:22.654969 containerd[1530]: time="2025-12-12T17:35:22.654916566Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd\"" Dec 12 17:35:22.655474 containerd[1530]: time="2025-12-12T17:35:22.655424049Z" level=info msg="StartContainer for \"80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd\"" Dec 12 17:35:22.657406 containerd[1530]: time="2025-12-12T17:35:22.657369621Z" level=info msg="connecting to shim 80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" protocol=ttrpc version=3 Dec 12 17:35:22.675651 systemd[1]: Started cri-containerd-80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd.scope - 
libcontainer container 80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd.
Dec 12 17:35:22.709827 containerd[1530]: time="2025-12-12T17:35:22.709783551Z" level=info msg="StartContainer for \"80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd\" returns successfully"
Dec 12 17:35:22.716183 systemd[1]: cri-containerd-80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd.scope: Deactivated successfully.
Dec 12 17:35:22.717107 containerd[1530]: time="2025-12-12T17:35:22.717067077Z" level=info msg="received container exit event container_id:\"80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd\" id:\"80c8fd112dcc83966aa2d69e70254cd2cfb65dffb2b7dc153d40b9876298a7bd\" pid:4582 exited_at:{seconds:1765560922 nanos:716858435}"
Dec 12 17:35:23.659160 containerd[1530]: time="2025-12-12T17:35:23.659107668Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:35:23.670131 containerd[1530]: time="2025-12-12T17:35:23.670080295Z" level=info msg="Container 90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:23.693592 containerd[1530]: time="2025-12-12T17:35:23.693512197Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d\""
Dec 12 17:35:23.694335 containerd[1530]: time="2025-12-12T17:35:23.694298402Z" level=info msg="StartContainer for \"90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d\""
Dec 12 17:35:23.696342 containerd[1530]: time="2025-12-12T17:35:23.696058253Z" level=info msg="connecting to shim 90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" protocol=ttrpc version=3
Dec 12 17:35:23.728681 systemd[1]: Started cri-containerd-90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d.scope - libcontainer container 90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d.
Dec 12 17:35:23.812371 systemd[1]: cri-containerd-90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d.scope: Deactivated successfully.
Dec 12 17:35:23.815359 containerd[1530]: time="2025-12-12T17:35:23.815287499Z" level=info msg="received container exit event container_id:\"90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d\" id:\"90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d\" pid:4626 exited_at:{seconds:1765560923 nanos:814713375}"
Dec 12 17:35:23.816244 containerd[1530]: time="2025-12-12T17:35:23.816177144Z" level=info msg="StartContainer for \"90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d\" returns successfully"
Dec 12 17:35:23.841724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90310afc5321324c1bd694b413288b64380646512680edad186b8fa7e33a3a1d-rootfs.mount: Deactivated successfully.
Dec 12 17:35:24.415099 kubelet[2679]: E1212 17:35:24.415037 2679 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 12 17:35:24.660523 containerd[1530]: time="2025-12-12T17:35:24.660097837Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:35:24.674858 containerd[1530]: time="2025-12-12T17:35:24.672326229Z" level=info msg="Container b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:24.684481 containerd[1530]: time="2025-12-12T17:35:24.683940257Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65\""
Dec 12 17:35:24.685205 containerd[1530]: time="2025-12-12T17:35:24.685173905Z" level=info msg="StartContainer for \"b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65\""
Dec 12 17:35:24.686305 containerd[1530]: time="2025-12-12T17:35:24.686278671Z" level=info msg="connecting to shim b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" protocol=ttrpc version=3
Dec 12 17:35:24.709664 systemd[1]: Started cri-containerd-b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65.scope - libcontainer container b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65.
Dec 12 17:35:24.735942 systemd[1]: cri-containerd-b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65.scope: Deactivated successfully.
Dec 12 17:35:24.759638 containerd[1530]: time="2025-12-12T17:35:24.759495703Z" level=info msg="received container exit event container_id:\"b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65\" id:\"b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65\" pid:4665 exited_at:{seconds:1765560924 nanos:738394059}"
Dec 12 17:35:24.766921 containerd[1530]: time="2025-12-12T17:35:24.766846706Z" level=info msg="StartContainer for \"b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65\" returns successfully"
Dec 12 17:35:24.781616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b669217f117be3397a528c710ed8bfb7e209a90dd75a31221486617afee1af65-rootfs.mount: Deactivated successfully.
Dec 12 17:35:25.664787 containerd[1530]: time="2025-12-12T17:35:25.664726440Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:35:25.678628 containerd[1530]: time="2025-12-12T17:35:25.678588719Z" level=info msg="Container 82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:25.684385 containerd[1530]: time="2025-12-12T17:35:25.684343192Z" level=info msg="CreateContainer within sandbox \"f6311d8a83e6d000a2ad1a9ee2d9ef4742afc951f4832a390543fe33eb1cf4a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849\""
Dec 12 17:35:25.684810 containerd[1530]: time="2025-12-12T17:35:25.684786634Z" level=info msg="StartContainer for \"82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849\""
Dec 12 17:35:25.687070 containerd[1530]: time="2025-12-12T17:35:25.687030807Z" level=info msg="connecting to shim 82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849" address="unix:///run/containerd/s/c04b0aa9bd65de2db165a363107c6769b81dc15e905f45ca4bedc4f2ed5fb454" protocol=ttrpc version=3
Dec 12 17:35:25.711630 systemd[1]: Started cri-containerd-82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849.scope - libcontainer container 82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849.
Dec 12 17:35:25.751966 containerd[1530]: time="2025-12-12T17:35:25.751929258Z" level=info msg="StartContainer for \"82d9c7f9465c16172a19ead7cf8d70b0289fa63fb803bdc120dd52fa595fd849\" returns successfully"
Dec 12 17:35:26.035517 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 12 17:35:28.960885 systemd-networkd[1439]: lxc_health: Link UP
Dec 12 17:35:28.970957 systemd-networkd[1439]: lxc_health: Gained carrier
Dec 12 17:35:30.031222 kubelet[2679]: I1212 17:35:30.031139 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xqdw4" podStartSLOduration=9.031124951 podStartE2EDuration="9.031124951s" podCreationTimestamp="2025-12-12 17:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:26.680584722 +0000 UTC m=+77.431744119" watchObservedRunningTime="2025-12-12 17:35:30.031124951 +0000 UTC m=+80.782284348"
Dec 12 17:35:30.450433 kubelet[2679]: E1212 17:35:30.450304 2679 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55744->127.0.0.1:39695: write tcp 127.0.0.1:55744->127.0.0.1:39695: write: connection reset by peer
Dec 12 17:35:30.632691 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Dec 12 17:35:34.722394 sshd[4465]: Connection closed by 10.0.0.1 port 44270
Dec 12 17:35:34.723163 sshd-session[4462]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:34.727029 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:44270.service: Deactivated successfully.
Dec 12 17:35:34.728948 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 17:35:34.731167 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit.
Dec 12 17:35:34.732706 systemd-logind[1510]: Removed session 25.