Dec 12 17:28:46.791369 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:28:46.791390 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:28:46.791400 kernel: KASLR enabled
Dec 12 17:28:46.791406 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:28:46.791411 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 12 17:28:46.791417 kernel: random: crng init done
Dec 12 17:28:46.791424 kernel: secureboot: Secure boot disabled
Dec 12 17:28:46.791429 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:28:46.791435 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 12 17:28:46.791442 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:28:46.791448 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791454 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791459 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791465 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791472 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791480 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791486 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791492 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791498 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:28:46.791504 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:28:46.791510 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:28:46.791516 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:28:46.791522 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 12 17:28:46.791528 kernel: Zone ranges:
Dec 12 17:28:46.791534 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:28:46.791541 kernel: DMA32 empty
Dec 12 17:28:46.791547 kernel: Normal empty
Dec 12 17:28:46.791553 kernel: Device empty
Dec 12 17:28:46.791559 kernel: Movable zone start for each node
Dec 12 17:28:46.791565 kernel: Early memory node ranges
Dec 12 17:28:46.791571 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 12 17:28:46.791577 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 12 17:28:46.791583 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 12 17:28:46.791589 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 12 17:28:46.791595 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 12 17:28:46.791601 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 12 17:28:46.791607 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 12 17:28:46.791615 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 12 17:28:46.791621 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 12 17:28:46.791627 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 12 17:28:46.791636 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 12 17:28:46.791642 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 12 17:28:46.791649 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:28:46.791657 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:28:46.791663 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:28:46.791670 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 12 17:28:46.791676 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:28:46.791689 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:28:46.791695 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:28:46.791702 kernel: psci: Trusted OS migration not required
Dec 12 17:28:46.791708 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:28:46.791717 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:28:46.791725 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:28:46.791733 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:28:46.791740 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:28:46.791747 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:28:46.791753 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:28:46.791759 kernel: CPU features: detected: Spectre-v4
Dec 12 17:28:46.791766 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:28:46.791772 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:28:46.791779 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:28:46.791785 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:28:46.791792 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:28:46.791798 kernel: alternatives: applying boot alternatives
Dec 12 17:28:46.791805 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:28:46.791814 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:28:46.791820 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:28:46.791826 kernel: Fallback order for Node 0: 0
Dec 12 17:28:46.791833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:28:46.791839 kernel: Policy zone: DMA
Dec 12 17:28:46.791845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:28:46.791851 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:28:46.791858 kernel: software IO TLB: area num 4.
Dec 12 17:28:46.791864 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:28:46.791870 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 12 17:28:46.791877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:28:46.791885 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:28:46.791892 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:28:46.791899 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:28:46.791905 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:28:46.791911 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:28:46.791917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:28:46.791924 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:28:46.791930 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:28:46.791937 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:28:46.791943 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:28:46.791950 kernel: GICv3: 256 SPIs implemented
Dec 12 17:28:46.791958 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:28:46.791964 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:28:46.791970 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:28:46.792034 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:28:46.792041 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:28:46.792048 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:28:46.792054 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:28:46.792061 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:28:46.792068 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:28:46.792074 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:28:46.792081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:28:46.792087 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:28:46.792096 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:28:46.792103 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:28:46.792109 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:28:46.792115 kernel: arm-pv: using stolen time PV
Dec 12 17:28:46.792122 kernel: Console: colour dummy device 80x25
Dec 12 17:28:46.792129 kernel: ACPI: Core revision 20240827
Dec 12 17:28:46.792135 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:28:46.792142 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:28:46.792149 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:28:46.792155 kernel: landlock: Up and running.
Dec 12 17:28:46.792163 kernel: SELinux: Initializing.
Dec 12 17:28:46.792170 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:28:46.792176 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:28:46.792183 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:28:46.792190 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:28:46.792197 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:28:46.792203 kernel: Remapping and enabling EFI services.
Dec 12 17:28:46.792210 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:28:46.792217 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:28:46.792229 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:28:46.792236 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:28:46.792243 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:28:46.792251 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:28:46.792264 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:28:46.792271 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:28:46.792278 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:28:46.792285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:28:46.792294 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:28:46.792309 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:28:46.792316 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:28:46.792323 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:28:46.792330 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:28:46.792336 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:28:46.792343 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:28:46.792350 kernel: SMP: Total of 4 processors activated.
Dec 12 17:28:46.792357 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:28:46.792365 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:28:46.792373 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:28:46.792379 kernel: CPU features: detected: Common not Private translations
Dec 12 17:28:46.792386 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:28:46.792393 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:28:46.792400 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:28:46.792407 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:28:46.792414 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:28:46.792421 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:28:46.792429 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:28:46.792436 kernel: alternatives: applying system-wide alternatives
Dec 12 17:28:46.792443 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:28:46.792450 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 12 17:28:46.792457 kernel: devtmpfs: initialized
Dec 12 17:28:46.792464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:28:46.792471 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:28:46.792478 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:28:46.792485 kernel: 0 pages in range for non-PLT usage
Dec 12 17:28:46.792493 kernel: 508400 pages in range for PLT usage
Dec 12 17:28:46.792500 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:28:46.792507 kernel: SMBIOS 3.0.0 present.
Dec 12 17:28:46.792514 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:28:46.792521 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:28:46.792527 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:28:46.792534 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:28:46.792541 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:28:46.792548 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:28:46.792557 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:28:46.792564 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Dec 12 17:28:46.792570 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:28:46.792577 kernel: cpuidle: using governor menu
Dec 12 17:28:46.792584 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:28:46.792591 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:28:46.792598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:28:46.792605 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:28:46.792612 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:28:46.792620 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:28:46.792627 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:28:46.792634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:28:46.792641 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:28:46.792647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:28:46.792654 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:28:46.792661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:28:46.792668 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:28:46.792675 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:28:46.792683 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:28:46.792690 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:28:46.792697 kernel: ACPI: Interpreter enabled
Dec 12 17:28:46.792704 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:28:46.792711 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:28:46.792718 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:28:46.792725 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:28:46.792732 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:28:46.792739 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:28:46.792746 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:28:46.792755 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:28:46.792763 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:28:46.792906 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:28:46.792985 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:28:46.793074 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:28:46.793137 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:28:46.793197 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:28:46.793210 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:28:46.793217 kernel: PCI host bridge to bus 0000:00
Dec 12 17:28:46.793291 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:28:46.793347 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:28:46.793399 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:28:46.793452 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:28:46.793528 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:28:46.793603 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:28:46.793702 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:28:46.793763 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:28:46.793829 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:28:46.793900 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:28:46.793969 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:28:46.794063 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:28:46.794127 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:28:46.794203 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:28:46.794274 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:28:46.794294 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:28:46.794301 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:28:46.794308 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:28:46.794315 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:28:46.794325 kernel: iommu: Default domain type: Translated
Dec 12 17:28:46.794332 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:28:46.794339 kernel: efivars: Registered efivars operations
Dec 12 17:28:46.794346 kernel: vgaarb: loaded
Dec 12 17:28:46.794353 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:28:46.794360 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:28:46.794367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:28:46.794374 kernel: pnp: PnP ACPI init
Dec 12 17:28:46.794449 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:28:46.794461 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:28:46.794472 kernel: NET: Registered PF_INET protocol family
Dec 12 17:28:46.794479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:28:46.794486 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:28:46.794493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:28:46.794501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:28:46.794508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:28:46.794515 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:28:46.794523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:28:46.794530 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:28:46.794537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:28:46.794544 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:28:46.794551 kernel: kvm [1]: HYP mode not available
Dec 12 17:28:46.794558 kernel: Initialise system trusted keyrings
Dec 12 17:28:46.794565 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:28:46.794571 kernel: Key type asymmetric registered
Dec 12 17:28:46.794578 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:28:46.794587 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:28:46.794594 kernel: io scheduler mq-deadline registered
Dec 12 17:28:46.794601 kernel: io scheduler kyber registered
Dec 12 17:28:46.794607 kernel: io scheduler bfq registered
Dec 12 17:28:46.794614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:28:46.794621 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:28:46.794628 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:28:46.794690 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:28:46.794700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:28:46.794709 kernel: thunder_xcv, ver 1.0
Dec 12 17:28:46.794715 kernel: thunder_bgx, ver 1.0
Dec 12 17:28:46.794722 kernel: nicpf, ver 1.0
Dec 12 17:28:46.794729 kernel: nicvf, ver 1.0
Dec 12 17:28:46.794795 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:28:46.794852 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:28:46 UTC (1765560526)
Dec 12 17:28:46.794861 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:28:46.794868 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:28:46.794877 kernel: watchdog: NMI not fully supported
Dec 12 17:28:46.794884 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:28:46.794891 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:28:46.794898 kernel: Segment Routing with IPv6
Dec 12 17:28:46.794904 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:28:46.794911 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:28:46.794918 kernel: Key type dns_resolver registered
Dec 12 17:28:46.794925 kernel: registered taskstats version 1
Dec 12 17:28:46.794932 kernel: Loading compiled-in X.509 certificates
Dec 12 17:28:46.794939 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:28:46.794947 kernel: Demotion targets for Node 0: null
Dec 12 17:28:46.794954 kernel: Key type .fscrypt registered
Dec 12 17:28:46.794961 kernel: Key type fscrypt-provisioning registered
Dec 12 17:28:46.794968 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:28:46.794984 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:28:46.795001 kernel: ima: No architecture policies found
Dec 12 17:28:46.795009 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:28:46.795016 kernel: clk: Disabling unused clocks
Dec 12 17:28:46.795023 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:28:46.795032 kernel: Warning: unable to open an initial console.
Dec 12 17:28:46.795039 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:28:46.795046 kernel: Run /init as init process
Dec 12 17:28:46.795053 kernel: with arguments:
Dec 12 17:28:46.795060 kernel: /init
Dec 12 17:28:46.795067 kernel: with environment:
Dec 12 17:28:46.795073 kernel: HOME=/
Dec 12 17:28:46.795080 kernel: TERM=linux
Dec 12 17:28:46.795088 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:28:46.795100 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:28:46.795108 systemd[1]: Detected virtualization kvm.
Dec 12 17:28:46.795115 systemd[1]: Detected architecture arm64.
Dec 12 17:28:46.795125 systemd[1]: Running in initrd.
Dec 12 17:28:46.795134 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:28:46.795144 systemd[1]: Hostname set to .
Dec 12 17:28:46.795154 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:28:46.795164 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:28:46.795172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:28:46.795179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:28:46.795187 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:28:46.795195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:28:46.795202 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:28:46.795211 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:28:46.795220 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:28:46.795228 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:28:46.795236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:28:46.795243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:28:46.795251 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:28:46.795265 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:28:46.795274 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:28:46.795284 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:28:46.795294 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:28:46.795302 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:28:46.795309 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:28:46.795317 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:28:46.795324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:28:46.795332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:28:46.795339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:28:46.795347 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:28:46.795355 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:28:46.795363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:28:46.795370 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:28:46.795378 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:28:46.795386 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:28:46.795393 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:28:46.795400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:28:46.795408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:28:46.795415 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:28:46.795425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:28:46.795433 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:28:46.795440 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:28:46.795466 systemd-journald[246]: Collecting audit messages is disabled.
Dec 12 17:28:46.795486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:46.795494 systemd-journald[246]: Journal started
Dec 12 17:28:46.795513 systemd-journald[246]: Runtime Journal (/run/log/journal/0b6d2d66686547e2bee746ad1c34c503) is 6M, max 48.5M, 42.4M free.
Dec 12 17:28:46.794262 systemd-modules-load[247]: Inserted module 'overlay'
Dec 12 17:28:46.797749 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:28:46.801249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:28:46.802851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:28:46.809995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:28:46.812778 systemd-modules-load[247]: Inserted module 'br_netfilter'
Dec 12 17:28:46.813717 kernel: Bridge firewalling registered
Dec 12 17:28:46.814093 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:28:46.815513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:28:46.819496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:28:46.821131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:28:46.822888 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:28:46.832121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:28:46.833541 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:28:46.836182 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:28:46.840562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:28:46.845170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:28:46.848224 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:28:46.857847 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:28:46.887065 systemd-resolved[290]: Positive Trust Anchors:
Dec 12 17:28:46.887082 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:28:46.887113 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:28:46.892109 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 12 17:28:46.893062 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:28:46.897911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:28:46.938009 kernel: SCSI subsystem initialized
Dec 12 17:28:46.942992 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:28:46.951012 kernel: iscsi: registered transport (tcp)
Dec 12 17:28:46.964003 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:28:46.964029 kernel: QLogic iSCSI HBA Driver
Dec 12 17:28:46.980887 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:28:47.010045 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:28:47.011623 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:28:47.059771 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:28:47.062016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:28:47.124010 kernel: raid6: neonx8 gen() 15786 MB/s Dec 12 17:28:47.140993 kernel: raid6: neonx4 gen() 15792 MB/s Dec 12 17:28:47.158013 kernel: raid6: neonx2 gen() 13098 MB/s Dec 12 17:28:47.174998 kernel: raid6: neonx1 gen() 10442 MB/s Dec 12 17:28:47.191992 kernel: raid6: int64x8 gen() 6874 MB/s Dec 12 17:28:47.208996 kernel: raid6: int64x4 gen() 7347 MB/s Dec 12 17:28:47.226006 kernel: raid6: int64x2 gen() 6101 MB/s Dec 12 17:28:47.243036 kernel: raid6: int64x1 gen() 5047 MB/s Dec 12 17:28:47.243064 kernel: raid6: using algorithm neonx4 gen() 15792 MB/s Dec 12 17:28:47.261023 kernel: raid6: .... xor() 12348 MB/s, rmw enabled Dec 12 17:28:47.261065 kernel: raid6: using neon recovery algorithm Dec 12 17:28:47.265995 kernel: xor: measuring software checksum speed Dec 12 17:28:47.266017 kernel: 8regs : 19303 MB/sec Dec 12 17:28:47.267085 kernel: 32regs : 21687 MB/sec Dec 12 17:28:47.268237 kernel: arm64_neon : 28080 MB/sec Dec 12 17:28:47.268257 kernel: xor: using function: arm64_neon (28080 MB/sec) Dec 12 17:28:47.320009 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:28:47.326553 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:28:47.329155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:28:47.359346 systemd-udevd[499]: Using default interface naming scheme 'v255'. Dec 12 17:28:47.364370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:28:47.371129 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:28:47.399259 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Dec 12 17:28:47.423153 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:28:47.425436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:28:47.479532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 17:28:47.482103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:28:47.531462 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:28:47.531637 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 12 17:28:47.539422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:28:47.539480 kernel: GPT:9289727 != 19775487 Dec 12 17:28:47.539491 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:28:47.544121 kernel: GPT:9289727 != 19775487 Dec 12 17:28:47.545534 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:28:47.545569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:28:47.545043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:28:47.545156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:28:47.547648 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:28:47.550022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:28:47.583007 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:28:47.585337 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:28:47.586609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:28:47.594648 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:28:47.605981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:28:47.607144 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 17:28:47.615655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Dec 12 17:28:47.616885 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:28:47.619130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:28:47.621223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:28:47.623872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:28:47.625662 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:28:47.645813 disk-uuid[590]: Primary Header is updated. Dec 12 17:28:47.645813 disk-uuid[590]: Secondary Entries is updated. Dec 12 17:28:47.645813 disk-uuid[590]: Secondary Header is updated. Dec 12 17:28:47.649522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:28:47.651241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:28:48.659860 disk-uuid[595]: The operation has completed successfully. Dec 12 17:28:48.661217 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:28:48.688834 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:28:48.688947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:28:48.713602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:28:48.731012 sh[610]: Success Dec 12 17:28:48.743004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:28:48.743053 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:28:48.743064 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:28:48.752008 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:28:48.780715 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:28:48.783225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 12 17:28:48.798908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 17:28:48.805002 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (622) Dec 12 17:28:48.805154 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 12 17:28:48.807002 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:28:48.811371 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:28:48.811408 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:28:48.812366 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 17:28:48.813597 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:28:48.814937 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:28:48.815706 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:28:48.817329 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:28:48.841999 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (651) Dec 12 17:28:48.845283 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:28:48.845389 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:28:48.849014 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:28:48.849055 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:28:48.853030 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:28:48.854429 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 12 17:28:48.857062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:28:48.925682 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:28:48.929877 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:28:48.955633 ignition[698]: Ignition 2.22.0 Dec 12 17:28:48.955644 ignition[698]: Stage: fetch-offline Dec 12 17:28:48.955676 ignition[698]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:48.955687 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:48.955773 ignition[698]: parsed url from cmdline: "" Dec 12 17:28:48.955776 ignition[698]: no config URL provided Dec 12 17:28:48.955784 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:28:48.955791 ignition[698]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:28:48.955811 ignition[698]: op(1): [started] loading QEMU firmware config module Dec 12 17:28:48.955815 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:28:48.960982 ignition[698]: op(1): [finished] loading QEMU firmware config module Dec 12 17:28:48.971030 systemd-networkd[799]: lo: Link UP Dec 12 17:28:48.971042 systemd-networkd[799]: lo: Gained carrier Dec 12 17:28:48.971731 systemd-networkd[799]: Enumeration completed Dec 12 17:28:48.971842 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:28:48.972125 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:28:48.972128 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 17:28:48.973204 systemd-networkd[799]: eth0: Link UP Dec 12 17:28:48.973361 systemd-networkd[799]: eth0: Gained carrier Dec 12 17:28:48.973371 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:28:48.973410 systemd[1]: Reached target network.target - Network. Dec 12 17:28:48.992023 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:28:49.017069 ignition[698]: parsing config with SHA512: f2ce42f22d277b24a9a23f95dbd5efcb9710936a4ccc5b2e4ea1f833d9a65a660acf5d127e516e6bc2749c95750cc7eeefda28a44ac6cdc166d677af1eeb3cce Dec 12 17:28:49.022303 unknown[698]: fetched base config from "system" Dec 12 17:28:49.023020 unknown[698]: fetched user config from "qemu" Dec 12 17:28:49.023430 ignition[698]: fetch-offline: fetch-offline passed Dec 12 17:28:49.025754 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:28:49.023488 ignition[698]: Ignition finished successfully Dec 12 17:28:49.027101 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:28:49.027859 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:28:49.070576 ignition[810]: Ignition 2.22.0 Dec 12 17:28:49.070592 ignition[810]: Stage: kargs Dec 12 17:28:49.070731 ignition[810]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:49.070740 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:49.071596 ignition[810]: kargs: kargs passed Dec 12 17:28:49.074386 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:28:49.071642 ignition[810]: Ignition finished successfully Dec 12 17:28:49.076414 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:28:49.109585 ignition[818]: Ignition 2.22.0 Dec 12 17:28:49.109600 ignition[818]: Stage: disks Dec 12 17:28:49.109733 ignition[818]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:49.112946 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:28:49.109742 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:49.114029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:28:49.110501 ignition[818]: disks: disks passed Dec 12 17:28:49.115713 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:28:49.110545 ignition[818]: Ignition finished successfully Dec 12 17:28:49.117963 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:28:49.119852 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:28:49.121196 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:28:49.123768 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:28:49.163564 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 17:28:49.169209 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:28:49.171453 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:28:49.241009 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 12 17:28:49.241361 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:28:49.242749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:28:49.245621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:28:49.247369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:28:49.248430 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 12 17:28:49.248479 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:28:49.248523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:28:49.269946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:28:49.272751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:28:49.277662 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Dec 12 17:28:49.277698 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:28:49.278672 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:28:49.281594 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:28:49.281641 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:28:49.283189 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:28:49.309770 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:28:49.314145 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:28:49.318934 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:28:49.323102 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:28:49.394541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:28:49.396634 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:28:49.398305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:28:49.417043 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:28:49.429066 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 17:28:49.437930 ignition[949]: INFO : Ignition 2.22.0 Dec 12 17:28:49.437930 ignition[949]: INFO : Stage: mount Dec 12 17:28:49.439678 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:49.439678 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:49.439678 ignition[949]: INFO : mount: mount passed Dec 12 17:28:49.439678 ignition[949]: INFO : Ignition finished successfully Dec 12 17:28:49.440474 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:28:49.443771 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:28:49.804828 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:28:49.806398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:28:49.822788 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963) Dec 12 17:28:49.822826 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:28:49.822836 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:28:49.826297 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:28:49.826324 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:28:49.827773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:28:49.860329 ignition[980]: INFO : Ignition 2.22.0 Dec 12 17:28:49.860329 ignition[980]: INFO : Stage: files Dec 12 17:28:49.861967 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:49.861967 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:49.861967 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:28:49.865319 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:28:49.865319 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:28:49.865319 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:28:49.865319 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:28:49.865319 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:28:49.864737 unknown[980]: wrote ssh authorized keys file for user: core Dec 12 17:28:49.872324 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:28:49.872324 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:28:49.932036 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:28:50.105031 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:28:50.105031 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:28:50.108740 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 12 17:28:50.292345 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 17:28:50.354047 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:28:50.354047 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:28:50.358776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 12 17:28:50.562181 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 17:28:50.716865 systemd-networkd[799]: eth0: Gained IPv6LL Dec 12 17:28:50.803439 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:28:50.803439 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 12 17:28:50.807113 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:28:50.821293 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:28:50.826144 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:28:50.827556 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:28:50.827556 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:28:50.827556 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:28:50.827556 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:28:50.827556 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:28:50.827556 ignition[980]: INFO : files: files passed Dec 12 17:28:50.827556 ignition[980]: INFO : Ignition finished successfully Dec 12 17:28:50.829405 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:28:50.833115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:28:50.835596 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:28:50.848671 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:28:50.848799 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 12 17:28:50.853054 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:28:50.854674 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:28:50.854674 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:28:50.860866 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:28:50.855358 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:28:50.858340 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:28:50.860258 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:28:50.889847 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:28:50.889993 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:28:50.892130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:28:50.893953 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:28:50.895710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:28:50.896507 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:28:50.917059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:28:50.920133 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:28:50.944082 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:28:50.951446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:28:50.955495 systemd[1]: Stopped target timers.target - Timer Units. 
Dec 12 17:28:50.956563 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:28:50.956687 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:28:50.960066 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:28:50.962018 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:28:50.963789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:28:50.965782 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:28:50.967941 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:28:50.970165 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:28:50.972162 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:28:50.974131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:28:50.976163 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:28:50.978230 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:28:50.980056 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:28:50.981642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:28:50.981775 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:28:50.984342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:28:50.986313 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:28:50.988386 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:28:50.989218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:28:50.990548 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:28:50.990669 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 12 17:28:50.993570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:28:50.993685 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:28:50.995582 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:28:50.997255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:28:50.998091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:28:50.999386 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:28:51.001171 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:28:51.002952 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:28:51.003045 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:28:51.005022 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:28:51.005110 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:28:51.007483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:28:51.007602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:28:51.009312 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:28:51.009411 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:28:51.011870 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:28:51.014275 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:28:51.015150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:28:51.015297 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:28:51.017212 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:28:51.017312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 12 17:28:51.023601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:28:51.028404 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:28:51.040003 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:28:51.045325 ignition[1036]: INFO : Ignition 2.22.0 Dec 12 17:28:51.045325 ignition[1036]: INFO : Stage: umount Dec 12 17:28:51.048272 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:51.048272 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:28:51.048272 ignition[1036]: INFO : umount: umount passed Dec 12 17:28:51.048272 ignition[1036]: INFO : Ignition finished successfully Dec 12 17:28:51.048086 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:28:51.048221 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:28:51.049561 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:28:51.049675 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:28:51.051724 systemd[1]: Stopped target network.target - Network. Dec 12 17:28:51.053025 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:28:51.053101 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:28:51.054785 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:28:51.054833 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:28:51.056514 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:28:51.056565 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:28:51.058224 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:28:51.058278 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:28:51.060096 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 12 17:28:51.060149 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:28:51.062220 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 17:28:51.064029 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 17:28:51.070505 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 17:28:51.070623 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 17:28:51.074602 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 17:28:51.074872 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:28:51.074907 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:28:51.078503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:28:51.078683 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 17:28:51.078775 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 17:28:51.081663 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 17:28:51.082108 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 17:28:51.084218 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 17:28:51.084267 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:28:51.087046 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 17:28:51.087878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 17:28:51.087934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:28:51.090495 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:28:51.090544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:28:51.093598 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 17:28:51.093642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:28:51.095902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:28:51.100816 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:28:51.114814 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 17:28:51.116388 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:28:51.117969 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 17:28:51.118060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:28:51.119904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 17:28:51.119936 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:28:51.121862 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 17:28:51.121908 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:28:51.124755 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 17:28:51.124805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:28:51.127693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 17:28:51.127743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:28:51.131572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 17:28:51.132666 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 17:28:51.132721 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:28:51.135920 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 17:28:51.135968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:28:51.139041 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 17:28:51.139084 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:28:51.142270 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 17:28:51.142310 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:28:51.145173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:28:51.145217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:51.148923 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 17:28:51.149033 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 17:28:51.150345 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 17:28:51.152020 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 17:28:51.154715 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 17:28:51.156617 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 17:28:51.175674 systemd[1]: Switching root.
Dec 12 17:28:51.204514 systemd-journald[246]: Journal stopped
Dec 12 17:28:52.033304 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Dec 12 17:28:52.033363 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 17:28:52.033375 kernel: SELinux: policy capability open_perms=1
Dec 12 17:28:52.033384 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 17:28:52.033397 kernel: SELinux: policy capability always_check_network=0
Dec 12 17:28:52.033413 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 17:28:52.033422 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 17:28:52.033431 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 17:28:52.033440 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 17:28:52.033454 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 17:28:52.033465 kernel: audit: type=1403 audit(1765560531.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 17:28:52.033476 systemd[1]: Successfully loaded SELinux policy in 46.573ms.
Dec 12 17:28:52.033496 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.622ms.
Dec 12 17:28:52.033508 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:28:52.033519 systemd[1]: Detected virtualization kvm.
Dec 12 17:28:52.033529 systemd[1]: Detected architecture arm64.
Dec 12 17:28:52.033540 systemd[1]: Detected first boot.
Dec 12 17:28:52.033550 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:28:52.033560 zram_generator::config[1082]: No configuration found.
Dec 12 17:28:52.033572 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 17:28:52.033582 systemd[1]: Populated /etc with preset unit settings.
Dec 12 17:28:52.033593 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 17:28:52.033604 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 17:28:52.033614 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 17:28:52.033624 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:28:52.033635 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 17:28:52.033645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 17:28:52.033657 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 17:28:52.033667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 17:28:52.033677 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 17:28:52.033687 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 17:28:52.033698 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 17:28:52.033709 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 17:28:52.033719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:28:52.033730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:28:52.033740 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 17:28:52.033752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 17:28:52.033762 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 17:28:52.033773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:28:52.033783 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 12 17:28:52.033793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:28:52.033803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:28:52.033813 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 17:28:52.033825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 17:28:52.033835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:28:52.033845 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:28:52.033856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:28:52.033868 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:28:52.033878 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:28:52.033888 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:28:52.033898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 17:28:52.033908 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 17:28:52.033920 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 17:28:52.033930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:28:52.033940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:28:52.033954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:28:52.033965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 17:28:52.034063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 17:28:52.034075 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 17:28:52.034086 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 17:28:52.034097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 17:28:52.034110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 17:28:52.034121 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 17:28:52.034132 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 17:28:52.034143 systemd[1]: Reached target machines.target - Containers.
Dec 12 17:28:52.034220 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 17:28:52.034246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:28:52.034259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:28:52.034270 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 17:28:52.034281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:28:52.034296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:28:52.034308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:28:52.034318 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 17:28:52.034330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:28:52.034340 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 17:28:52.034351 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 17:28:52.034361 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 17:28:52.034373 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 17:28:52.034385 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 17:28:52.034397 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:28:52.034407 kernel: fuse: init (API version 7.41)
Dec 12 17:28:52.034419 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:28:52.034430 kernel: ACPI: bus type drm_connector registered
Dec 12 17:28:52.034440 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:28:52.034450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:28:52.034461 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 17:28:52.034470 kernel: loop: module loaded
Dec 12 17:28:52.034481 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 17:28:52.034492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:28:52.034530 systemd-journald[1154]: Collecting audit messages is disabled.
Dec 12 17:28:52.034554 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 17:28:52.034567 systemd[1]: Stopped verity-setup.service.
Dec 12 17:28:52.034579 systemd-journald[1154]: Journal started
Dec 12 17:28:52.034600 systemd-journald[1154]: Runtime Journal (/run/log/journal/0b6d2d66686547e2bee746ad1c34c503) is 6M, max 48.5M, 42.4M free.
Dec 12 17:28:51.792500 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 17:28:51.817170 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 17:28:51.817594 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 17:28:52.037990 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:28:52.039914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 17:28:52.041207 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 17:28:52.042469 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 17:28:52.043844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 17:28:52.045210 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 17:28:52.046435 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 17:28:52.049009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 17:28:52.050413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:28:52.051882 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 17:28:52.052073 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 17:28:52.053444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:28:52.053595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:28:52.054899 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:28:52.057072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:28:52.058282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:28:52.058439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:28:52.059876 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 17:28:52.060046 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 17:28:52.061378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:28:52.061540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:28:52.064386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:28:52.065712 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:28:52.067217 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 17:28:52.068631 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 17:28:52.082652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:28:52.084955 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 17:28:52.086834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 17:28:52.088062 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 17:28:52.088103 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:28:52.089821 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 17:28:52.096746 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 17:28:52.097920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:28:52.099026 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 17:28:52.100910 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 17:28:52.102285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:28:52.106100 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 17:28:52.107602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:28:52.109201 systemd-journald[1154]: Time spent on flushing to /var/log/journal/0b6d2d66686547e2bee746ad1c34c503 is 16.177ms for 886 entries.
Dec 12 17:28:52.109201 systemd-journald[1154]: System Journal (/var/log/journal/0b6d2d66686547e2bee746ad1c34c503) is 8M, max 195.6M, 187.6M free.
Dec 12 17:28:52.128918 systemd-journald[1154]: Received client request to flush runtime journal.
Dec 12 17:28:52.109668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:28:52.113197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 17:28:52.116207 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:28:52.118859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:28:52.121923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 17:28:52.123285 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 17:28:52.129036 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 17:28:52.130715 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 17:28:52.133987 kernel: loop0: detected capacity change from 0 to 200800
Dec 12 17:28:52.137609 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 17:28:52.142138 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 17:28:52.144989 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:28:52.158381 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Dec 12 17:28:52.158405 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Dec 12 17:28:52.160186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:28:52.163005 kernel: loop1: detected capacity change from 0 to 100632
Dec 12 17:28:52.163966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:28:52.167546 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 17:28:52.172852 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 17:28:52.175139 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 17:28:52.194001 kernel: loop2: detected capacity change from 0 to 119840
Dec 12 17:28:52.194200 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 17:28:52.197134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:28:52.218368 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Dec 12 17:28:52.218383 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Dec 12 17:28:52.221007 kernel: loop3: detected capacity change from 0 to 200800
Dec 12 17:28:52.222116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:28:52.230117 kernel: loop4: detected capacity change from 0 to 100632
Dec 12 17:28:52.236997 kernel: loop5: detected capacity change from 0 to 119840
Dec 12 17:28:52.241622 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 17:28:52.242027 (sd-merge)[1224]: Merged extensions into '/usr'.
Dec 12 17:28:52.245314 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 17:28:52.245432 systemd[1]: Reloading...
Dec 12 17:28:52.313024 zram_generator::config[1250]: No configuration found.
Dec 12 17:28:52.378089 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 17:28:52.454407 systemd[1]: Reloading finished in 208 ms.
Dec 12 17:28:52.485726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 17:28:52.489006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 17:28:52.507302 systemd[1]: Starting ensure-sysext.service...
Dec 12 17:28:52.509163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:28:52.517942 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)...
Dec 12 17:28:52.517982 systemd[1]: Reloading...
Dec 12 17:28:52.523677 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 17:28:52.523713 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 17:28:52.523961 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 17:28:52.524186 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 17:28:52.524786 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 17:28:52.525016 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Dec 12 17:28:52.525062 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Dec 12 17:28:52.527947 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:28:52.527962 systemd-tmpfiles[1288]: Skipping /boot
Dec 12 17:28:52.533954 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:28:52.533992 systemd-tmpfiles[1288]: Skipping /boot
Dec 12 17:28:52.576010 zram_generator::config[1313]: No configuration found.
Dec 12 17:28:52.704287 systemd[1]: Reloading finished in 186 ms.
Dec 12 17:28:52.731014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 17:28:52.736555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:28:52.755285 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:28:52.757869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 17:28:52.766875 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 17:28:52.770003 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:28:52.773131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:28:52.777239 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 17:28:52.783407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:28:52.791090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:28:52.794318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:28:52.798161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:28:52.799651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:28:52.800032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:28:52.802798 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 17:28:52.805460 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 17:28:52.807537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:28:52.807721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:28:52.810966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:28:52.811389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:28:52.814623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:28:52.814817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:28:52.823574 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 17:28:52.827449 augenrules[1383]: No rules
Dec 12 17:28:52.829577 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:28:52.829792 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:28:52.830554 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Dec 12 17:28:52.837013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 17:28:52.842224 systemd[1]: Finished ensure-sysext.service.
Dec 12 17:28:52.847140 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:28:52.848239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:28:52.851190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:28:52.854730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:28:52.858215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:28:52.863187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:28:52.864325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:28:52.864373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:28:52.868644 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 17:28:52.872432 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 17:28:52.873548 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 17:28:52.875969 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:28:52.877373 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 17:28:52.878734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:28:52.878965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:28:52.889866 augenrules[1393]: /sbin/augenrules: No change
Dec 12 17:28:52.901404 augenrules[1449]: No rules
Dec 12 17:28:52.909167 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:28:52.911144 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:28:52.911764 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:28:52.913499 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:28:52.915766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:28:52.917193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:28:52.918115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:28:52.919644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:28:52.920100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:28:52.921706 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 17:28:52.930300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:28:52.930365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:28:52.945408 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 12 17:28:53.001627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:28:53.005581 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 17:28:53.038193 systemd-resolved[1354]: Positive Trust Anchors:
Dec 12 17:28:53.038211 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:28:53.038249 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:28:53.040497 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 17:28:53.043716 systemd-networkd[1454]: lo: Link UP
Dec 12 17:28:53.044029 systemd-networkd[1454]: lo: Gained carrier
Dec 12 17:28:53.044460 systemd-resolved[1354]: Defaulting to hostname 'linux'.
Dec 12 17:28:53.045478 systemd-networkd[1454]: Enumeration completed
Dec 12 17:28:53.045785 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:28:53.046576 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:28:53.046805 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:28:53.047068 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:28:53.047831 systemd-networkd[1454]: eth0: Link UP
Dec 12 17:28:53.047949 systemd-networkd[1454]: eth0: Gained carrier
Dec 12 17:28:53.047964 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:28:53.048949 systemd[1]: Reached target network.target - Network.
Dec 12 17:28:53.049824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:28:53.052776 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 17:28:53.055753 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 17:28:53.059085 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 17:28:53.060456 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:28:53.061639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 17:28:53.062911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 17:28:53.063132 systemd-networkd[1454]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:28:53.063767 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection.
Dec 12 17:28:53.064274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 17:28:53.064800 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 12 17:28:53.064853 systemd-timesyncd[1423]: Initial clock synchronization to Fri 2025-12-12 17:28:53.003839 UTC.
Dec 12 17:28:53.066106 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 17:28:53.066142 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:28:53.066986 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 17:28:53.068106 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 17:28:53.070292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 17:28:53.071635 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:28:53.073399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 17:28:53.076003 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 17:28:53.079571 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 17:28:53.081021 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 17:28:53.083988 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 17:28:53.091549 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 17:28:53.093395 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 17:28:53.095885 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 17:28:53.097920 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:28:53.098931 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:28:53.099966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:28:53.100000 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:28:53.102077 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 17:28:53.104366 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 17:28:53.107078 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 17:28:53.110002 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 17:28:53.113016 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 17:28:53.113963 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 17:28:53.115094 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 17:28:53.119042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 17:28:53.121221 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 17:28:53.125707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 17:28:53.127159 jq[1495]: false
Dec 12 17:28:53.129792 extend-filesystems[1496]: Found /dev/vda6
Dec 12 17:28:53.130177 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 17:28:53.132968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 17:28:53.133422 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 17:28:53.133709 extend-filesystems[1496]: Found /dev/vda9
Dec 12 17:28:53.135105 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 17:28:53.136559 extend-filesystems[1496]: Checking size of /dev/vda9
Dec 12 17:28:53.139129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 17:28:53.142052 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 17:28:53.147695 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 17:28:53.150281 jq[1511]: true
Dec 12 17:28:53.150408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 17:28:53.150582 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 17:28:53.152502 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 17:28:53.152697 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 17:28:53.154105 extend-filesystems[1496]: Resized partition /dev/vda9
Dec 12 17:28:53.159253 extend-filesystems[1523]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 17:28:53.167106 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 17:28:53.173250 jq[1524]: true
Dec 12 17:28:53.173995 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 12 17:28:53.178255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:28:53.185527 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 17:28:53.187019 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 17:28:53.192875 update_engine[1508]: I20251212 17:28:53.192653 1508 main.cc:92] Flatcar Update Engine starting
Dec 12 17:28:53.199469 tar[1522]: linux-arm64/LICENSE
Dec 12 17:28:53.199469 tar[1522]: linux-arm64/helm
Dec 12 17:28:53.201642 dbus-daemon[1492]: [system] SELinux support is enabled
Dec 12 17:28:53.201824 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 17:28:53.206002 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 12 17:28:53.206768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 17:28:53.207295 update_engine[1508]: I20251212 17:28:53.206897 1508 update_check_scheduler.cc:74] Next update check in 5m33s
Dec 12 17:28:53.206804 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 17:28:53.209301 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 17:28:53.209325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 17:28:53.215363 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 17:28:53.223711 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 17:28:53.225081 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 12 17:28:53.225081 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 12 17:28:53.225081 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 12 17:28:53.230430 extend-filesystems[1496]: Resized filesystem in /dev/vda9
Dec 12 17:28:53.231160 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 17:28:53.231383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 17:28:53.264971 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 17:28:53.269031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 17:28:53.271758 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 12 17:28:53.298903 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 12 17:28:53.300861 systemd-logind[1505]: New seat seat0.
Dec 12 17:28:53.301509 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 17:28:53.319505 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 17:28:53.326071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:53.368983 containerd[1527]: time="2025-12-12T17:28:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 17:28:53.370227 containerd[1527]: time="2025-12-12T17:28:53.370189760Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 17:28:53.379836 containerd[1527]: time="2025-12-12T17:28:53.379795800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.92µs"
Dec 12 17:28:53.379836 containerd[1527]: time="2025-12-12T17:28:53.379832880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 17:28:53.379909 containerd[1527]: time="2025-12-12T17:28:53.379851040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 17:28:53.380036 containerd[1527]: time="2025-12-12T17:28:53.380016200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 17:28:53.380074 containerd[1527]: time="2025-12-12T17:28:53.380038280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 17:28:53.380074 containerd[1527]: time="2025-12-12T17:28:53.380062560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380133 containerd[1527]: time="2025-12-12T17:28:53.380113560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380133 containerd[1527]: time="2025-12-12T17:28:53.380130240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380378 containerd[1527]: time="2025-12-12T17:28:53.380356240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380378 containerd[1527]: time="2025-12-12T17:28:53.380376480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380419 containerd[1527]: time="2025-12-12T17:28:53.380388120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380419 containerd[1527]: time="2025-12-12T17:28:53.380396160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380493 containerd[1527]: time="2025-12-12T17:28:53.380474680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380674 containerd[1527]: time="2025-12-12T17:28:53.380654480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380700 containerd[1527]: time="2025-12-12T17:28:53.380689480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:28:53.380720 containerd[1527]: time="2025-12-12T17:28:53.380699480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 17:28:53.380753 containerd[1527]: time="2025-12-12T17:28:53.380738920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 17:28:53.380970 containerd[1527]: time="2025-12-12T17:28:53.380954280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 17:28:53.381051 containerd[1527]: time="2025-12-12T17:28:53.381033440Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 17:28:53.384909 containerd[1527]: time="2025-12-12T17:28:53.384876400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.384951240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.384969160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.384997000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385010000Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385020040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385041960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385053960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385066720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385077120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385086320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385098000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385212800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385244800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 17:28:53.385393 containerd[1527]: time="2025-12-12T17:28:53.385262040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385281560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385294560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385305040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385316200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385325840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385336800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385346960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385357520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385543400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385558360Z" level=info msg="Start snapshots syncer"
Dec 12 17:28:53.385799 containerd[1527]: time="2025-12-12T17:28:53.385588880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 17:28:53.386406 containerd[1527]: time="2025-12-12T17:28:53.385843040Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 17:28:53.386406 containerd[1527]: time="2025-12-12T17:28:53.385897200Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.385960080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386078800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386101360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386125960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386140880Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386151760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386162120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386174320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386203640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386218280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386241080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386284560Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386306320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 17:28:53.386772 containerd[1527]: time="2025-12-12T17:28:53.386315440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386323960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386330920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386341720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386351600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386433720Z" level=info msg="runtime interface created"
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386439200Z" level=info msg="created NRI interface"
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386447240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386458160Z" level=info msg="Connect containerd service"
Dec 12 17:28:53.387032 containerd[1527]: time="2025-12-12T17:28:53.386488680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 17:28:53.388980 containerd[1527]: time="2025-12-12T17:28:53.387316280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 17:28:53.457014 containerd[1527]: time="2025-12-12T17:28:53.456953720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 17:28:53.457115 containerd[1527]: time="2025-12-12T17:28:53.457043280Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 17:28:53.457115 containerd[1527]: time="2025-12-12T17:28:53.457072960Z" level=info msg="Start subscribing containerd event"
Dec 12 17:28:53.457115 containerd[1527]: time="2025-12-12T17:28:53.457111480Z" level=info msg="Start recovering state"
Dec 12 17:28:53.457213 containerd[1527]: time="2025-12-12T17:28:53.457192720Z" level=info msg="Start event monitor"
Dec 12 17:28:53.457213 containerd[1527]: time="2025-12-12T17:28:53.457211960Z" level=info msg="Start cni network conf syncer for default"
Dec 12 17:28:53.457263 containerd[1527]: time="2025-12-12T17:28:53.457219840Z" level=info msg="Start streaming server"
Dec 12 17:28:53.457263 containerd[1527]: time="2025-12-12T17:28:53.457237680Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 17:28:53.457263 containerd[1527]: time="2025-12-12T17:28:53.457247040Z" level=info msg="runtime interface starting up..."
Dec 12 17:28:53.457263 containerd[1527]: time="2025-12-12T17:28:53.457252800Z" level=info msg="starting plugins..."
Dec 12 17:28:53.457332 containerd[1527]: time="2025-12-12T17:28:53.457267120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 17:28:53.457397 containerd[1527]: time="2025-12-12T17:28:53.457381560Z" level=info msg="containerd successfully booted in 0.089519s"
Dec 12 17:28:53.457497 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 17:28:53.482071 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 17:28:53.505040 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 17:28:53.507541 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 17:28:53.529949 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 17:28:53.530224 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 17:28:53.532844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 17:28:53.554993 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 17:28:53.557837 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 17:28:53.561211 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 12 17:28:53.562505 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 17:28:53.563101 tar[1522]: linux-arm64/README.md
Dec 12 17:28:53.582404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 17:28:54.425087 systemd-networkd[1454]: eth0: Gained IPv6LL
Dec 12 17:28:54.427717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 17:28:54.429495 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 17:28:54.431800 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 12 17:28:54.434177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:28:54.451538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 17:28:54.467148 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 12 17:28:54.468019 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 12 17:28:54.469744 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 17:28:54.471896 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 17:28:54.974888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:28:54.976481 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 17:28:54.979429 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:28:54.982088 systemd[1]: Startup finished in 2.085s (kernel) + 4.782s (initrd) + 3.641s (userspace) = 10.508s.
Dec 12 17:28:55.324494 kubelet[1631]: E1212 17:28:55.324376 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:28:55.326576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:28:55.326733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:28:55.327107 systemd[1]: kubelet.service: Consumed 684ms CPU time, 246.8M memory peak.
Dec 12 17:28:59.683356 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 17:28:59.684388 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:36652.service - OpenSSH per-connection server daemon (10.0.0.1:36652).
Dec 12 17:28:59.770257 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 36652 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:28:59.772153 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:28:59.778521 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 17:28:59.779408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 17:28:59.785709 systemd-logind[1505]: New session 1 of user core.
Dec 12 17:28:59.802334 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 17:28:59.804947 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 17:28:59.821360 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 17:28:59.823688 systemd-logind[1505]: New session c1 of user core.
Dec 12 17:28:59.937279 systemd[1650]: Queued start job for default target default.target.
Dec 12 17:28:59.960954 systemd[1650]: Created slice app.slice - User Application Slice.
Dec 12 17:28:59.961148 systemd[1650]: Reached target paths.target - Paths.
Dec 12 17:28:59.961198 systemd[1650]: Reached target timers.target - Timers.
Dec 12 17:28:59.962427 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 17:28:59.971412 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 17:28:59.971468 systemd[1650]: Reached target sockets.target - Sockets.
Dec 12 17:28:59.971502 systemd[1650]: Reached target basic.target - Basic System.
Dec 12 17:28:59.971539 systemd[1650]: Reached target default.target - Main User Target.
Dec 12 17:28:59.971565 systemd[1650]: Startup finished in 141ms.
Dec 12 17:28:59.971693 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 17:28:59.972962 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 17:29:00.040244 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662).
Dec 12 17:29:00.100206 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:00.101430 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:00.105368 systemd-logind[1505]: New session 2 of user core.
Dec 12 17:29:00.123182 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 17:29:00.174498 sshd[1664]: Connection closed by 10.0.0.1 port 36662
Dec 12 17:29:00.174968 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:00.184185 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:36662.service: Deactivated successfully.
Dec 12 17:29:00.186420 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 17:29:00.187867 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit.
Dec 12 17:29:00.189646 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:36674.service - OpenSSH per-connection server daemon (10.0.0.1:36674).
Dec 12 17:29:00.190583 systemd-logind[1505]: Removed session 2.
Dec 12 17:29:00.256924 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 36674 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:00.258346 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:00.263051 systemd-logind[1505]: New session 3 of user core.
Dec 12 17:29:00.272155 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 17:29:00.319049 sshd[1673]: Connection closed by 10.0.0.1 port 36674
Dec 12 17:29:00.319584 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:00.339195 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:36674.service: Deactivated successfully.
Dec 12 17:29:00.341525 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 17:29:00.342941 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit.
Dec 12 17:29:00.345055 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:36688.service - OpenSSH per-connection server daemon (10.0.0.1:36688).
Dec 12 17:29:00.346066 systemd-logind[1505]: Removed session 3.
Dec 12 17:29:00.405711 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 36688 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:00.407130 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:00.412716 systemd-logind[1505]: New session 4 of user core.
Dec 12 17:29:00.418140 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 17:29:00.469875 sshd[1683]: Connection closed by 10.0.0.1 port 36688
Dec 12 17:29:00.470360 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:00.491483 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:36688.service: Deactivated successfully.
Dec 12 17:29:00.494417 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 17:29:00.495915 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit.
Dec 12 17:29:00.497314 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:36696.service - OpenSSH per-connection server daemon (10.0.0.1:36696).
Dec 12 17:29:00.498072 systemd-logind[1505]: Removed session 4.
Dec 12 17:29:00.560824 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 36696 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:00.562112 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:00.566004 systemd-logind[1505]: New session 5 of user core.
Dec 12 17:29:00.583198 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 17:29:00.640552 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 17:29:00.640841 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:29:00.652997 sudo[1693]: pam_unix(sudo:session): session closed for user root
Dec 12 17:29:00.655258 sshd[1692]: Connection closed by 10.0.0.1 port 36696
Dec 12 17:29:00.655141 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:00.672307 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:36696.service: Deactivated successfully.
Dec 12 17:29:00.675530 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 17:29:00.676301 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit.
Dec 12 17:29:00.678660 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:36712.service - OpenSSH per-connection server daemon (10.0.0.1:36712).
Dec 12 17:29:00.679670 systemd-logind[1505]: Removed session 5.
Dec 12 17:29:00.750180 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 36712 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:00.751872 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:00.756126 systemd-logind[1505]: New session 6 of user core.
Dec 12 17:29:00.764187 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 17:29:00.815536 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 17:29:00.815818 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:29:00.889533 sudo[1704]: pam_unix(sudo:session): session closed for user root
Dec 12 17:29:00.894704 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 17:29:00.894991 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:29:00.905159 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:29:00.948879 augenrules[1726]: No rules
Dec 12 17:29:00.950335 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:29:00.950577 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:29:00.951508 sudo[1703]: pam_unix(sudo:session): session closed for user root
Dec 12 17:29:00.952772 sshd[1702]: Connection closed by 10.0.0.1 port 36712
Dec 12 17:29:00.953204 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:00.961951 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:36712.service: Deactivated successfully.
Dec 12 17:29:00.965494 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 17:29:00.966731 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit.
Dec 12 17:29:00.969240 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:45774.service - OpenSSH per-connection server daemon (10.0.0.1:45774).
Dec 12 17:29:00.969903 systemd-logind[1505]: Removed session 6.
Dec 12 17:29:01.028948 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 45774 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:01.030124 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:01.036368 systemd-logind[1505]: New session 7 of user core.
Dec 12 17:29:01.045159 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 17:29:01.098549 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 17:29:01.099179 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:29:01.389401 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 17:29:01.410356 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 17:29:01.609889 dockerd[1760]: time="2025-12-12T17:29:01.609822020Z" level=info msg="Starting up"
Dec 12 17:29:01.610764 dockerd[1760]: time="2025-12-12T17:29:01.610739429Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 17:29:01.622237 dockerd[1760]: time="2025-12-12T17:29:01.622192095Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 17:29:01.658584 dockerd[1760]: time="2025-12-12T17:29:01.658475041Z" level=info msg="Loading containers: start."
Dec 12 17:29:01.670002 kernel: Initializing XFRM netlink socket
Dec 12 17:29:01.875221 systemd-networkd[1454]: docker0: Link UP
Dec 12 17:29:01.915952 dockerd[1760]: time="2025-12-12T17:29:01.915830785Z" level=info msg="Loading containers: done."
Dec 12 17:29:01.927125 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3462899657-merged.mount: Deactivated successfully.
Dec 12 17:29:01.929847 dockerd[1760]: time="2025-12-12T17:29:01.929794922Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 17:29:01.929922 dockerd[1760]: time="2025-12-12T17:29:01.929888601Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 17:29:01.930007 dockerd[1760]: time="2025-12-12T17:29:01.929988701Z" level=info msg="Initializing buildkit"
Dec 12 17:29:01.953958 dockerd[1760]: time="2025-12-12T17:29:01.953908311Z" level=info msg="Completed buildkit initialization"
Dec 12 17:29:01.959115 dockerd[1760]: time="2025-12-12T17:29:01.959072031Z" level=info msg="Daemon has completed initialization"
Dec 12 17:29:01.959603 dockerd[1760]: time="2025-12-12T17:29:01.959142779Z" level=info msg="API listen on /run/docker.sock"
Dec 12 17:29:01.959318 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 17:29:02.361398 containerd[1527]: time="2025-12-12T17:29:02.361143390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 12 17:29:02.993933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874690732.mount: Deactivated successfully.
Dec 12 17:29:04.143870 containerd[1527]: time="2025-12-12T17:29:04.143788791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:04.145660 containerd[1527]: time="2025-12-12T17:29:04.145610895Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042"
Dec 12 17:29:04.148955 containerd[1527]: time="2025-12-12T17:29:04.148916902Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:04.154369 containerd[1527]: time="2025-12-12T17:29:04.154317147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:04.155268 containerd[1527]: time="2025-12-12T17:29:04.155235983Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.794048463s"
Dec 12 17:29:04.155317 containerd[1527]: time="2025-12-12T17:29:04.155279496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Dec 12 17:29:04.155997 containerd[1527]: time="2025-12-12T17:29:04.155902366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 12 17:29:05.318962 containerd[1527]: time="2025-12-12T17:29:05.318890123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:05.321397 containerd[1527]: time="2025-12-12T17:29:05.321348328Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479"
Dec 12 17:29:05.324411 containerd[1527]: time="2025-12-12T17:29:05.324358325Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:05.329065 containerd[1527]: time="2025-12-12T17:29:05.328992991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:05.330576 containerd[1527]: time="2025-12-12T17:29:05.330514800Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.174581573s"
Dec 12 17:29:05.330576 containerd[1527]: time="2025-12-12T17:29:05.330558922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Dec 12 17:29:05.331172 containerd[1527]: time="2025-12-12T17:29:05.331120776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 12 17:29:05.577158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:29:05.578857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:05.741524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:05.745683 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:29:05.795656 kubelet[2047]: E1212 17:29:05.795591 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:29:05.799009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:29:05.799330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:29:05.799814 systemd[1]: kubelet.service: Consumed 163ms CPU time, 107M memory peak.
Dec 12 17:29:06.456401 containerd[1527]: time="2025-12-12T17:29:06.456350637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:06.457405 containerd[1527]: time="2025-12-12T17:29:06.457116422Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718"
Dec 12 17:29:06.458346 containerd[1527]: time="2025-12-12T17:29:06.458303000Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:06.462292 containerd[1527]: time="2025-12-12T17:29:06.462247903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:06.463504 containerd[1527]: time="2025-12-12T17:29:06.463269095Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.1319858s"
Dec 12 17:29:06.463504 containerd[1527]: time="2025-12-12T17:29:06.463311390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Dec 12 17:29:06.463840 containerd[1527]: time="2025-12-12T17:29:06.463811901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 12 17:29:07.671364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274708141.mount: Deactivated successfully.
Dec 12 17:29:07.992290 containerd[1527]: time="2025-12-12T17:29:07.992174069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:07.992923 containerd[1527]: time="2025-12-12T17:29:07.992903889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255"
Dec 12 17:29:07.993910 containerd[1527]: time="2025-12-12T17:29:07.993835517Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:07.995596 containerd[1527]: time="2025-12-12T17:29:07.995566153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:07.996130 containerd[1527]: time="2025-12-12T17:29:07.996100835Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.532252748s"
Dec 12 17:29:07.996180 containerd[1527]: time="2025-12-12T17:29:07.996134230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Dec 12 17:29:07.996723 containerd[1527]: time="2025-12-12T17:29:07.996594971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 12 17:29:08.489351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909380227.mount: Deactivated successfully.
Dec 12 17:29:09.265030 containerd[1527]: time="2025-12-12T17:29:09.264460500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.265030 containerd[1527]: time="2025-12-12T17:29:09.264965261Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408"
Dec 12 17:29:09.266006 containerd[1527]: time="2025-12-12T17:29:09.265951128Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.269039 containerd[1527]: time="2025-12-12T17:29:09.268988245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.270364 containerd[1527]: time="2025-12-12T17:29:09.270319836Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.27368911s"
Dec 12 17:29:09.270364 containerd[1527]: time="2025-12-12T17:29:09.270362632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Dec 12 17:29:09.271079 containerd[1527]: time="2025-12-12T17:29:09.271054641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 12 17:29:09.701659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919280028.mount: Deactivated successfully.
Dec 12 17:29:09.708452 containerd[1527]: time="2025-12-12T17:29:09.708395693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.709074 containerd[1527]: time="2025-12-12T17:29:09.709050020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711"
Dec 12 17:29:09.709966 containerd[1527]: time="2025-12-12T17:29:09.709926879Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.712613 containerd[1527]: time="2025-12-12T17:29:09.712557294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:09.713158 containerd[1527]: time="2025-12-12T17:29:09.713120475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 442.03127ms"
Dec 12 17:29:09.713158 containerd[1527]: time="2025-12-12T17:29:09.713155160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Dec 12 17:29:09.713651 containerd[1527]: time="2025-12-12T17:29:09.713579204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 12 17:29:10.206847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205834817.mount: Deactivated successfully.
Dec 12 17:29:12.521101 containerd[1527]: time="2025-12-12T17:29:12.521036670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:12.521537 containerd[1527]: time="2025-12-12T17:29:12.521487959Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989"
Dec 12 17:29:12.522438 containerd[1527]: time="2025-12-12T17:29:12.522397613Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:12.526164 containerd[1527]: time="2025-12-12T17:29:12.526126446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:12.527137 containerd[1527]: time="2025-12-12T17:29:12.527055086Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.813447389s"
Dec 12 17:29:12.527137 containerd[1527]: time="2025-12-12T17:29:12.527088543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Dec 12 17:29:16.049539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 17:29:16.051344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:16.207095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:16.220257 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:29:16.251573 kubelet[2210]: E1212 17:29:16.251518 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:29:16.253432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:29:16.253629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:29:16.254031 systemd[1]: kubelet.service: Consumed 132ms CPU time, 107.2M memory peak.
Dec 12 17:29:16.255676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:16.256042 systemd[1]: kubelet.service: Consumed 132ms CPU time, 107.2M memory peak.
Dec 12 17:29:16.258620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:16.282616 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)...
Dec 12 17:29:16.282626 systemd[1]: Reloading...
Dec 12 17:29:16.355078 zram_generator::config[2268]: No configuration found.
Dec 12 17:29:16.633672 systemd[1]: Reloading finished in 350 ms.
Dec 12 17:29:16.689059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:16.691744 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 17:29:16.692148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:16.692275 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95.2M memory peak.
Dec 12 17:29:16.693896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:16.809162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:16.813236 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:29:16.850568 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:29:16.850568 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:29:16.851364 kubelet[2315]: I1212 17:29:16.851312 2315 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:29:17.285394 kubelet[2315]: I1212 17:29:17.285340 2315 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 17:29:17.285394 kubelet[2315]: I1212 17:29:17.285373 2315 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:29:17.285394 kubelet[2315]: I1212 17:29:17.285396 2315 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 17:29:17.285394 kubelet[2315]: I1212 17:29:17.285402 2315 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:29:17.285649 kubelet[2315]: I1212 17:29:17.285635 2315 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:29:17.381896 kubelet[2315]: E1212 17:29:17.381845 2315 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 17:29:17.386639 kubelet[2315]: I1212 17:29:17.386098 2315 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:29:17.394893 kubelet[2315]: I1212 17:29:17.394842 2315 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:29:17.398177 kubelet[2315]: I1212 17:29:17.398143 2315 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 17:29:17.398412 kubelet[2315]: I1212 17:29:17.398383 2315 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:29:17.398567 kubelet[2315]: I1212 17:29:17.398414 2315 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:29:17.398715 kubelet[2315]: I1212 17:29:17.398570 2315 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:29:17.398715 kubelet[2315]: I1212 17:29:17.398578 2315 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 17:29:17.398715 kubelet[2315]: I1212 17:29:17.398688 2315 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 17:29:17.402002 kubelet[2315]: I1212 17:29:17.401962 2315 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:29:17.403213 kubelet[2315]: I1212 17:29:17.403181 2315 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 17:29:17.403213 kubelet[2315]: I1212 17:29:17.403209 2315 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:29:17.403857 kubelet[2315]: E1212 17:29:17.403802 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 17:29:17.406229 kubelet[2315]: I1212 17:29:17.404320 2315 kubelet.go:387] "Adding apiserver pod source"
Dec 12 17:29:17.406229 kubelet[2315]: I1212 17:29:17.406052 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:29:17.406811 kubelet[2315]: E1212 17:29:17.406778 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:29:17.408119 kubelet[2315]: I1212 17:29:17.408093 2315 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:29:17.408840 kubelet[2315]: I1212 17:29:17.408796 2315 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:29:17.408840 kubelet[2315]: I1212 17:29:17.408832 2315 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 17:29:17.408923 kubelet[2315]: W1212 17:29:17.408883 2315 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 17:29:17.420220 kubelet[2315]: I1212 17:29:17.420181 2315 server.go:1262] "Started kubelet"
Dec 12 17:29:17.420767 kubelet[2315]: I1212 17:29:17.420682 2315 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:29:17.421362 kubelet[2315]: I1212 17:29:17.421318 2315 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:29:17.421493 kubelet[2315]: I1212 17:29:17.421476 2315 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 17:29:17.421695 kubelet[2315]: I1212 17:29:17.421661 2315 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 17:29:17.421947 kubelet[2315]: I1212 17:29:17.421929 2315 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:29:17.423600 kubelet[2315]: I1212 17:29:17.423574 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:29:17.425161 kubelet[2315]: E1212 17:29:17.425122 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:17.425161 kubelet[2315]: I1212 17:29:17.425163 2315 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 17:29:17.426068 kubelet[2315]: I1212 17:29:17.425340 2315 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 17:29:17.426068 kubelet[2315]: I1212 17:29:17.425397 2315 reconciler.go:29] "Reconciler: start to
sync state" Dec 12 17:29:17.426068 kubelet[2315]: I1212 17:29:17.425489 2315 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:29:17.426068 kubelet[2315]: E1212 17:29:17.425807 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:29:17.427442 kubelet[2315]: E1212 17:29:17.427400 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Dec 12 17:29:17.429635 kubelet[2315]: I1212 17:29:17.429596 2315 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:29:17.429767 kubelet[2315]: I1212 17:29:17.429744 2315 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:29:17.431435 kubelet[2315]: E1212 17:29:17.425719 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188087f8fe779347 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:29:17.420122951 +0000 UTC 
m=+0.602035641,LastTimestamp:2025-12-12 17:29:17.420122951 +0000 UTC m=+0.602035641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:29:17.431618 kubelet[2315]: I1212 17:29:17.431594 2315 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:29:17.432359 kubelet[2315]: E1212 17:29:17.432329 2315 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:29:17.442119 kubelet[2315]: I1212 17:29:17.442085 2315 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:29:17.442119 kubelet[2315]: I1212 17:29:17.442101 2315 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:29:17.442119 kubelet[2315]: I1212 17:29:17.442120 2315 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:29:17.445364 kubelet[2315]: I1212 17:29:17.445328 2315 policy_none.go:49] "None policy: Start" Dec 12 17:29:17.445364 kubelet[2315]: I1212 17:29:17.445355 2315 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:29:17.445364 kubelet[2315]: I1212 17:29:17.445369 2315 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:29:17.446507 kubelet[2315]: I1212 17:29:17.446482 2315 policy_none.go:47] "Start" Dec 12 17:29:17.451424 kubelet[2315]: I1212 17:29:17.451373 2315 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:29:17.451755 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:29:17.452878 kubelet[2315]: I1212 17:29:17.452832 2315 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:29:17.452878 kubelet[2315]: I1212 17:29:17.452869 2315 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:29:17.452993 kubelet[2315]: I1212 17:29:17.452910 2315 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:29:17.452993 kubelet[2315]: E1212 17:29:17.452957 2315 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:29:17.455657 kubelet[2315]: E1212 17:29:17.455457 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:29:17.464172 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:29:17.467232 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:29:17.477957 kubelet[2315]: E1212 17:29:17.477917 2315 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:29:17.478178 kubelet[2315]: I1212 17:29:17.478158 2315 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:29:17.478244 kubelet[2315]: I1212 17:29:17.478177 2315 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:29:17.478775 kubelet[2315]: I1212 17:29:17.478744 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:29:17.479879 kubelet[2315]: E1212 17:29:17.479860 2315 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:29:17.479943 kubelet[2315]: E1212 17:29:17.479907 2315 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:29:17.568590 systemd[1]: Created slice kubepods-burstable-pod33e43a70b2b2d6d6d24390a720d633c7.slice - libcontainer container kubepods-burstable-pod33e43a70b2b2d6d6d24390a720d633c7.slice. Dec 12 17:29:17.580349 kubelet[2315]: I1212 17:29:17.580298 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:29:17.580777 kubelet[2315]: E1212 17:29:17.580751 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Dec 12 17:29:17.582806 kubelet[2315]: E1212 17:29:17.582775 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:17.585541 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 12 17:29:17.596668 kubelet[2315]: E1212 17:29:17.596626 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:17.601317 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Dec 12 17:29:17.604305 kubelet[2315]: E1212 17:29:17.604264 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:17.628146 kubelet[2315]: E1212 17:29:17.628105 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Dec 12 17:29:17.727495 kubelet[2315]: I1212 17:29:17.727440 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:29:17.727990 kubelet[2315]: I1212 17:29:17.727754 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:29:17.727990 kubelet[2315]: I1212 17:29:17.727797 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:29:17.727990 kubelet[2315]: I1212 17:29:17.727815 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:29:17.727990 kubelet[2315]: I1212 17:29:17.727852 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:29:17.727990 kubelet[2315]: I1212 17:29:17.727870 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:29:17.728142 kubelet[2315]: I1212 17:29:17.727884 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:29:17.728142 kubelet[2315]: I1212 17:29:17.727897 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:29:17.728142 kubelet[2315]: I1212 17:29:17.727924 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:29:17.783097 kubelet[2315]: I1212 17:29:17.783032 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:29:17.784089 kubelet[2315]: E1212 17:29:17.784059 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Dec 12 17:29:17.947051 containerd[1527]: time="2025-12-12T17:29:17.946922178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33e43a70b2b2d6d6d24390a720d633c7,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:17.950365 containerd[1527]: time="2025-12-12T17:29:17.950322898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:17.954092 containerd[1527]: time="2025-12-12T17:29:17.953908632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:18.028903 kubelet[2315]: E1212 17:29:18.028852 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Dec 12 17:29:18.185910 kubelet[2315]: I1212 17:29:18.185867 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:29:18.186292 kubelet[2315]: E1212 17:29:18.186242 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection 
refused" node="localhost" Dec 12 17:29:18.379137 kubelet[2315]: E1212 17:29:18.379020 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:29:18.446667 kubelet[2315]: E1212 17:29:18.446472 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:29:18.465570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197101563.mount: Deactivated successfully. Dec 12 17:29:18.476671 containerd[1527]: time="2025-12-12T17:29:18.476596628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:29:18.477983 containerd[1527]: time="2025-12-12T17:29:18.477925697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:29:18.479807 containerd[1527]: time="2025-12-12T17:29:18.479750214Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:29:18.481708 containerd[1527]: time="2025-12-12T17:29:18.481638151Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:29:18.483248 containerd[1527]: 
time="2025-12-12T17:29:18.483199508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:29:18.485627 containerd[1527]: time="2025-12-12T17:29:18.485568577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:29:18.487035 containerd[1527]: time="2025-12-12T17:29:18.486131123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 536.780522ms" Dec 12 17:29:18.487035 containerd[1527]: time="2025-12-12T17:29:18.486593420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:29:18.488752 containerd[1527]: time="2025-12-12T17:29:18.488701449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:29:18.492173 containerd[1527]: time="2025-12-12T17:29:18.492112355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 536.470775ms" Dec 12 17:29:18.496153 containerd[1527]: time="2025-12-12T17:29:18.496099764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 542.843181ms" Dec 12 17:29:18.523004 containerd[1527]: time="2025-12-12T17:29:18.522486373Z" level=info msg="connecting to shim 93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6" address="unix:///run/containerd/s/344edb5be118628d51ac368c462519a182a8330b7ac2ca5a3f0f594c39b7ce6e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:18.523210 containerd[1527]: time="2025-12-12T17:29:18.523184478Z" level=info msg="connecting to shim 985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd" address="unix:///run/containerd/s/62208b40134fb66a4fe83109319437ab6c07ee415a9ec07f84861665dbcca542" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:18.532298 containerd[1527]: time="2025-12-12T17:29:18.532252437Z" level=info msg="connecting to shim 180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6" address="unix:///run/containerd/s/e541adf6b4d455a37f4f4412e39e7f70985984143aac7a13147c786fecdb5547" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:18.550196 systemd[1]: Started cri-containerd-93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6.scope - libcontainer container 93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6. Dec 12 17:29:18.553572 systemd[1]: Started cri-containerd-985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd.scope - libcontainer container 985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd. Dec 12 17:29:18.556581 systemd[1]: Started cri-containerd-180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6.scope - libcontainer container 180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6. 
Dec 12 17:29:18.597336 containerd[1527]: time="2025-12-12T17:29:18.597279471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33e43a70b2b2d6d6d24390a720d633c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd\"" Dec 12 17:29:18.600826 containerd[1527]: time="2025-12-12T17:29:18.600738203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6\"" Dec 12 17:29:18.603912 containerd[1527]: time="2025-12-12T17:29:18.603853920Z" level=info msg="CreateContainer within sandbox \"985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:29:18.606279 containerd[1527]: time="2025-12-12T17:29:18.606229227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6\"" Dec 12 17:29:18.607623 containerd[1527]: time="2025-12-12T17:29:18.607579690Z" level=info msg="CreateContainer within sandbox \"93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:29:18.611364 containerd[1527]: time="2025-12-12T17:29:18.611320214Z" level=info msg="CreateContainer within sandbox \"180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:29:18.615872 containerd[1527]: time="2025-12-12T17:29:18.615831061Z" level=info msg="Container f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5: CDI devices from CRI Config.CDIDevices: []" Dec 12 
17:29:18.624885 containerd[1527]: time="2025-12-12T17:29:18.624824763Z" level=info msg="Container 3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:18.625201 containerd[1527]: time="2025-12-12T17:29:18.625066888Z" level=info msg="CreateContainer within sandbox \"985cb10c5032b633a92b48e553857716d2db64a3c9112b768baaee2fa6405bdd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5\"" Dec 12 17:29:18.626003 containerd[1527]: time="2025-12-12T17:29:18.625962691Z" level=info msg="StartContainer for \"f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5\"" Dec 12 17:29:18.627318 containerd[1527]: time="2025-12-12T17:29:18.627248214Z" level=info msg="connecting to shim f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5" address="unix:///run/containerd/s/62208b40134fb66a4fe83109319437ab6c07ee415a9ec07f84861665dbcca542" protocol=ttrpc version=3 Dec 12 17:29:18.627882 containerd[1527]: time="2025-12-12T17:29:18.627850308Z" level=info msg="Container 56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:18.636731 containerd[1527]: time="2025-12-12T17:29:18.636340566Z" level=info msg="CreateContainer within sandbox \"93779aedbc21b122ec42ac1df1d7c3a0f5bee136abb8da56fd4b25c2bdfe04e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175\"" Dec 12 17:29:18.637172 kubelet[2315]: E1212 17:29:18.637139 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 
12 17:29:18.639346 containerd[1527]: time="2025-12-12T17:29:18.639305210Z" level=info msg="StartContainer for \"3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175\"" Dec 12 17:29:18.640907 containerd[1527]: time="2025-12-12T17:29:18.640624243Z" level=info msg="connecting to shim 3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175" address="unix:///run/containerd/s/344edb5be118628d51ac368c462519a182a8330b7ac2ca5a3f0f594c39b7ce6e" protocol=ttrpc version=3 Dec 12 17:29:18.641901 containerd[1527]: time="2025-12-12T17:29:18.641702110Z" level=info msg="CreateContainer within sandbox \"180ba482af586acbf79071abd95e3a0d6819e6a96624cd0d6fbc63e0a29338b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95\"" Dec 12 17:29:18.643755 containerd[1527]: time="2025-12-12T17:29:18.643725965Z" level=info msg="StartContainer for \"56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95\"" Dec 12 17:29:18.645955 containerd[1527]: time="2025-12-12T17:29:18.645919647Z" level=info msg="connecting to shim 56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95" address="unix:///run/containerd/s/e541adf6b4d455a37f4f4412e39e7f70985984143aac7a13147c786fecdb5547" protocol=ttrpc version=3 Dec 12 17:29:18.651189 systemd[1]: Started cri-containerd-f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5.scope - libcontainer container f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5. Dec 12 17:29:18.667193 systemd[1]: Started cri-containerd-3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175.scope - libcontainer container 3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175. Dec 12 17:29:18.670770 systemd[1]: Started cri-containerd-56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95.scope - libcontainer container 56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95. 
Dec 12 17:29:18.708546 containerd[1527]: time="2025-12-12T17:29:18.708469406Z" level=info msg="StartContainer for \"f118591151e49b9b6685db3cacc4294f68dd3a01dbac578a4d3e08860f8ab1c5\" returns successfully" Dec 12 17:29:18.727364 containerd[1527]: time="2025-12-12T17:29:18.727323983Z" level=info msg="StartContainer for \"56a3c8c9364e3391ca71b4a9c007440b538d7f2799a3da2ea61636410861fe95\" returns successfully" Dec 12 17:29:18.727651 containerd[1527]: time="2025-12-12T17:29:18.727620011Z" level=info msg="StartContainer for \"3d909dd0c797e7d385675681f3a3c907b9fa51c56afc92769b81a4873f82c175\" returns successfully" Dec 12 17:29:18.781539 kubelet[2315]: E1212 17:29:18.781494 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:29:18.989537 kubelet[2315]: I1212 17:29:18.989434 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:29:19.468036 kubelet[2315]: E1212 17:29:19.467934 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:19.473460 kubelet[2315]: E1212 17:29:19.473425 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:19.477129 kubelet[2315]: E1212 17:29:19.477102 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:29:20.479112 kubelet[2315]: E1212 17:29:20.479071 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost"
Dec 12 17:29:20.479432 kubelet[2315]: E1212 17:29:20.479407 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:29:20.703452 kubelet[2315]: E1212 17:29:20.702502 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 12 17:29:20.782584 kubelet[2315]: I1212 17:29:20.782456 2315 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 17:29:20.782584 kubelet[2315]: E1212 17:29:20.782501 2315 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 12 17:29:20.796807 kubelet[2315]: E1212 17:29:20.796749 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:20.897764 kubelet[2315]: E1212 17:29:20.897709 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:20.998567 kubelet[2315]: E1212 17:29:20.998513 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.098737 kubelet[2315]: E1212 17:29:21.098609 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.199321 kubelet[2315]: E1212 17:29:21.199273 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.299902 kubelet[2315]: E1212 17:29:21.299854 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.400796 kubelet[2315]: E1212 17:29:21.400665 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.501041 kubelet[2315]: E1212 17:29:21.501006 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.601745 kubelet[2315]: E1212 17:29:21.601673 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:21.627805 kubelet[2315]: I1212 17:29:21.627754 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:21.633704 kubelet[2315]: E1212 17:29:21.633660 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:21.633704 kubelet[2315]: I1212 17:29:21.633697 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:21.636016 kubelet[2315]: E1212 17:29:21.635969 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:21.636016 kubelet[2315]: I1212 17:29:21.636010 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:21.638092 kubelet[2315]: E1212 17:29:21.638034 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:22.410053 kubelet[2315]: I1212 17:29:22.409950 2315 apiserver.go:52] "Watching apiserver"
Dec 12 17:29:22.426349 kubelet[2315]: I1212 17:29:22.426311 2315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 17:29:22.759304 systemd[1]: Reload requested from client PID 2602 ('systemctl') (unit session-7.scope)...
Dec 12 17:29:22.759325 systemd[1]: Reloading...
Dec 12 17:29:22.853032 zram_generator::config[2648]: No configuration found.
Dec 12 17:29:23.042063 systemd[1]: Reloading finished in 282 ms.
Dec 12 17:29:23.065522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:23.085180 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 17:29:23.085625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:23.085686 systemd[1]: kubelet.service: Consumed 853ms CPU time, 121.8M memory peak.
Dec 12 17:29:23.087710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:23.268836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:23.285515 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:29:23.329548 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:29:23.329548 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:29:23.329548 kubelet[2687]: I1212 17:29:23.328685 2687 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:29:23.336067 kubelet[2687]: I1212 17:29:23.336015 2687 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 17:29:23.336067 kubelet[2687]: I1212 17:29:23.336052 2687 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:29:23.336224 kubelet[2687]: I1212 17:29:23.336087 2687 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 17:29:23.336224 kubelet[2687]: I1212 17:29:23.336094 2687 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:29:23.336368 kubelet[2687]: I1212 17:29:23.336329 2687 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:29:23.337710 kubelet[2687]: I1212 17:29:23.337681 2687 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 17:29:23.340037 kubelet[2687]: I1212 17:29:23.340004 2687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:29:23.346000 kubelet[2687]: I1212 17:29:23.344849 2687 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:29:23.347990 kubelet[2687]: I1212 17:29:23.347933 2687 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 17:29:23.348371 kubelet[2687]: I1212 17:29:23.348320 2687 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:29:23.348681 kubelet[2687]: I1212 17:29:23.348440 2687 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:29:23.348949 kubelet[2687]: I1212 17:29:23.348934 2687 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:29:23.349055 kubelet[2687]: I1212 17:29:23.349032 2687 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 17:29:23.349146 kubelet[2687]: I1212 17:29:23.349136 2687 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 17:29:23.350475 kubelet[2687]: I1212 17:29:23.350450 2687 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:29:23.350738 kubelet[2687]: I1212 17:29:23.350721 2687 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 17:29:23.350820 kubelet[2687]: I1212 17:29:23.350810 2687 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:29:23.350908 kubelet[2687]: I1212 17:29:23.350899 2687 kubelet.go:387] "Adding apiserver pod source"
Dec 12 17:29:23.350957 kubelet[2687]: I1212 17:29:23.350949 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:29:23.351903 kubelet[2687]: I1212 17:29:23.351885 2687 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:29:23.352608 kubelet[2687]: I1212 17:29:23.352587 2687 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:29:23.352705 kubelet[2687]: I1212 17:29:23.352695 2687 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 17:29:23.355298 kubelet[2687]: I1212 17:29:23.355274 2687 server.go:1262] "Started kubelet"
Dec 12 17:29:23.358514 kubelet[2687]: I1212 17:29:23.356159 2687 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:29:23.358514 kubelet[2687]: I1212 17:29:23.356244 2687 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 17:29:23.358514 kubelet[2687]: I1212 17:29:23.356509 2687 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:29:23.358514 kubelet[2687]: I1212 17:29:23.356583 2687 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:29:23.360253 kubelet[2687]: I1212 17:29:23.360233 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:29:23.363326 kubelet[2687]: I1212 17:29:23.363292 2687 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 17:29:23.365834 kubelet[2687]: I1212 17:29:23.365784 2687 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:29:23.371776 kubelet[2687]: I1212 17:29:23.371722 2687 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 17:29:23.372096 kubelet[2687]: I1212 17:29:23.371923 2687 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 17:29:23.372096 kubelet[2687]: E1212 17:29:23.371963 2687 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:29:23.372169 kubelet[2687]: I1212 17:29:23.372101 2687 reconciler.go:29] "Reconciler: start to sync state"
Dec 12 17:29:23.376378 kubelet[2687]: E1212 17:29:23.376312 2687 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:29:23.377731 kubelet[2687]: I1212 17:29:23.377658 2687 factory.go:223] Registration of the containerd container factory successfully
Dec 12 17:29:23.377731 kubelet[2687]: I1212 17:29:23.377721 2687 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:29:23.378034 kubelet[2687]: I1212 17:29:23.377905 2687 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:29:23.387160 kubelet[2687]: I1212 17:29:23.387114 2687 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 12 17:29:23.388587 kubelet[2687]: I1212 17:29:23.388533 2687 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 12 17:29:23.388587 kubelet[2687]: I1212 17:29:23.388575 2687 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 12 17:29:23.388729 kubelet[2687]: I1212 17:29:23.388605 2687 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 12 17:29:23.388729 kubelet[2687]: E1212 17:29:23.388668 2687 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 17:29:23.422293 kubelet[2687]: I1212 17:29:23.422251 2687 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 17:29:23.422293 kubelet[2687]: I1212 17:29:23.422276 2687 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 17:29:23.422293 kubelet[2687]: I1212 17:29:23.422301 2687 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:29:23.422472 kubelet[2687]: I1212 17:29:23.422460 2687 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 17:29:23.422507 kubelet[2687]: I1212 17:29:23.422471 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 17:29:23.422507 kubelet[2687]: I1212 17:29:23.422488 2687 policy_none.go:49] "None policy: Start"
Dec 12 17:29:23.422507 kubelet[2687]: I1212 17:29:23.422496 2687 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 12 17:29:23.422507 kubelet[2687]: I1212 17:29:23.422505 2687 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 12 17:29:23.422890 kubelet[2687]: I1212 17:29:23.422609 2687 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 12 17:29:23.422890 kubelet[2687]: I1212 17:29:23.422625 2687 policy_none.go:47] "Start"
Dec 12 17:29:23.429276 kubelet[2687]: E1212 17:29:23.429245 2687 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 17:29:23.429577 kubelet[2687]: I1212 17:29:23.429474 2687 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 17:29:23.429577 kubelet[2687]: I1212 17:29:23.429490 2687 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 17:29:23.429750 kubelet[2687]: I1212 17:29:23.429681 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 17:29:23.431833 kubelet[2687]: E1212 17:29:23.431013 2687 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 17:29:23.489718 kubelet[2687]: I1212 17:29:23.489668 2687 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.490223 kubelet[2687]: I1212 17:29:23.489691 2687 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:23.490223 kubelet[2687]: I1212 17:29:23.489739 2687 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:23.533679 kubelet[2687]: I1212 17:29:23.533636 2687 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:29:23.541786 kubelet[2687]: I1212 17:29:23.541374 2687 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 12 17:29:23.541786 kubelet[2687]: I1212 17:29:23.541477 2687 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 17:29:23.573405 kubelet[2687]: I1212 17:29:23.573358 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:23.573526 kubelet[2687]: I1212 17:29:23.573423 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.573526 kubelet[2687]: I1212 17:29:23.573486 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.573526 kubelet[2687]: I1212 17:29:23.573509 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.573626 kubelet[2687]: I1212 17:29:23.573542 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:23.573626 kubelet[2687]: I1212 17:29:23.573561 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33e43a70b2b2d6d6d24390a720d633c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33e43a70b2b2d6d6d24390a720d633c7\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:23.573626 kubelet[2687]: I1212 17:29:23.573592 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.573734 kubelet[2687]: I1212 17:29:23.573625 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:29:23.573734 kubelet[2687]: I1212 17:29:23.573641 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:23.845620 sudo[2726]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 12 17:29:23.845903 sudo[2726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 12 17:29:24.173325 sudo[2726]: pam_unix(sudo:session): session closed for user root
Dec 12 17:29:24.351940 kubelet[2687]: I1212 17:29:24.351888 2687 apiserver.go:52] "Watching apiserver"
Dec 12 17:29:24.372219 kubelet[2687]: I1212 17:29:24.372148 2687 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 17:29:24.403559 kubelet[2687]: I1212 17:29:24.403518 2687 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:24.404228 kubelet[2687]: I1212 17:29:24.403753 2687 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:24.410768 kubelet[2687]: E1212 17:29:24.410446 2687 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:29:24.411113 kubelet[2687]: E1212 17:29:24.411086 2687 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:29:24.439749 kubelet[2687]: I1212 17:29:24.439386 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.436541685 podStartE2EDuration="1.436541685s" podCreationTimestamp="2025-12-12 17:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:24.427878245 +0000 UTC m=+1.138940241" watchObservedRunningTime="2025-12-12 17:29:24.436541685 +0000 UTC m=+1.147603641"
Dec 12 17:29:24.439749 kubelet[2687]: I1212 17:29:24.439559 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.439550908 podStartE2EDuration="1.439550908s" podCreationTimestamp="2025-12-12 17:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:24.4364319 +0000 UTC m=+1.147493896" watchObservedRunningTime="2025-12-12 17:29:24.439550908 +0000 UTC m=+1.150612864"
Dec 12 17:29:24.446751 kubelet[2687]: I1212 17:29:24.446563 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.446547418 podStartE2EDuration="1.446547418s" podCreationTimestamp="2025-12-12 17:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:24.44595122 +0000 UTC m=+1.157013256" watchObservedRunningTime="2025-12-12 17:29:24.446547418 +0000 UTC m=+1.157609414"
Dec 12 17:29:26.357351 sudo[1739]: pam_unix(sudo:session): session closed for user root
Dec 12 17:29:26.358810 sshd[1738]: Connection closed by 10.0.0.1 port 45774
Dec 12 17:29:26.359367 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:26.363699 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit.
Dec 12 17:29:26.364470 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:45774.service: Deactivated successfully.
Dec 12 17:29:26.366655 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 17:29:26.367027 systemd[1]: session-7.scope: Consumed 6.358s CPU time, 258.9M memory peak.
Dec 12 17:29:26.370634 systemd-logind[1505]: Removed session 7.
Dec 12 17:29:27.867603 kubelet[2687]: I1212 17:29:27.867567 2687 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 17:29:27.867993 containerd[1527]: time="2025-12-12T17:29:27.867926929Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 17:29:27.869393 kubelet[2687]: I1212 17:29:27.869366 2687 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 17:29:28.925829 systemd[1]: Created slice kubepods-besteffort-pod80663f60_c201_4529_9727_4168c81321b2.slice - libcontainer container kubepods-besteffort-pod80663f60_c201_4529_9727_4168c81321b2.slice.
Dec 12 17:29:28.945333 systemd[1]: Created slice kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice - libcontainer container kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice.
Dec 12 17:29:29.005242 kubelet[2687]: I1212 17:29:29.005186 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-xtables-lock\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005242 kubelet[2687]: I1212 17:29:29.005230 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-config-path\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005242 kubelet[2687]: I1212 17:29:29.005245 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-net\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005260 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-kernel\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005278 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-run\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005293 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-etc-cni-netd\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005306 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250c4f7b-fa61-4469-b3af-2ac66ad11387-clustermesh-secrets\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005319 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-hubble-tls\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005641 kubelet[2687]: I1212 17:29:29.005334 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80663f60-c201-4529-9727-4168c81321b2-lib-modules\") pod \"kube-proxy-4zr8c\" (UID: \"80663f60-c201-4529-9727-4168c81321b2\") " pod="kube-system/kube-proxy-4zr8c"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005347 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-hostproc\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005360 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cni-path\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005375 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-lib-modules\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005388 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzkz\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-kube-api-access-rqzkz\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005402 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80663f60-c201-4529-9727-4168c81321b2-kube-proxy\") pod \"kube-proxy-4zr8c\" (UID: \"80663f60-c201-4529-9727-4168c81321b2\") " pod="kube-system/kube-proxy-4zr8c"
Dec 12 17:29:29.005763 kubelet[2687]: I1212 17:29:29.005432 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80663f60-c201-4529-9727-4168c81321b2-xtables-lock\") pod \"kube-proxy-4zr8c\" (UID: \"80663f60-c201-4529-9727-4168c81321b2\") " pod="kube-system/kube-proxy-4zr8c"
Dec 12 17:29:29.005900 kubelet[2687]: I1212 17:29:29.005446 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-cgroup\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.005900 kubelet[2687]: I1212 17:29:29.005461 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbbc\" (UniqueName: \"kubernetes.io/projected/80663f60-c201-4529-9727-4168c81321b2-kube-api-access-9kbbc\") pod \"kube-proxy-4zr8c\" (UID: \"80663f60-c201-4529-9727-4168c81321b2\") " pod="kube-system/kube-proxy-4zr8c"
Dec 12 17:29:29.005900 kubelet[2687]: I1212 17:29:29.005479 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-bpf-maps\") pod \"cilium-pxgrl\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " pod="kube-system/cilium-pxgrl"
Dec 12 17:29:29.087400 systemd[1]: Created slice kubepods-besteffort-pod23d4d2f2_63bf_4628_afe1_d8900fae32e7.slice - libcontainer container kubepods-besteffort-pod23d4d2f2_63bf_4628_afe1_d8900fae32e7.slice.
Dec 12 17:29:29.107545 kubelet[2687]: I1212 17:29:29.106609 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tmn\" (UniqueName: \"kubernetes.io/projected/23d4d2f2-63bf-4628-afe1-d8900fae32e7-kube-api-access-82tmn\") pod \"cilium-operator-6f9c7c5859-fs7fs\" (UID: \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\") " pod="kube-system/cilium-operator-6f9c7c5859-fs7fs"
Dec 12 17:29:29.107545 kubelet[2687]: I1212 17:29:29.106719 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d4d2f2-63bf-4628-afe1-d8900fae32e7-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fs7fs\" (UID: \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\") " pod="kube-system/cilium-operator-6f9c7c5859-fs7fs"
Dec 12 17:29:29.244124 containerd[1527]: time="2025-12-12T17:29:29.244027090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zr8c,Uid:80663f60-c201-4529-9727-4168c81321b2,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:29.251085 containerd[1527]: time="2025-12-12T17:29:29.251045600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxgrl,Uid:250c4f7b-fa61-4469-b3af-2ac66ad11387,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:29.262560 containerd[1527]: time="2025-12-12T17:29:29.262519939Z" level=info msg="connecting to shim dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703" address="unix:///run/containerd/s/0359a6f0ab35b20593708a27c863474fd76c2bf837024e29be68daa012a58ff7" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:29.269008 containerd[1527]: time="2025-12-12T17:29:29.268892279Z" level=info msg="connecting to shim 4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:29.285138 systemd[1]: Started cri-containerd-dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703.scope - libcontainer container dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703.
Dec 12 17:29:29.289179 systemd[1]: Started cri-containerd-4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928.scope - libcontainer container 4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928.
Dec 12 17:29:29.315134 containerd[1527]: time="2025-12-12T17:29:29.315083142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zr8c,Uid:80663f60-c201-4529-9727-4168c81321b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703\""
Dec 12 17:29:29.323858 containerd[1527]: time="2025-12-12T17:29:29.323756333Z" level=info msg="CreateContainer within sandbox \"dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 17:29:29.325358 containerd[1527]: time="2025-12-12T17:29:29.325316860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxgrl,Uid:250c4f7b-fa61-4469-b3af-2ac66ad11387,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\""
Dec 12 17:29:29.330408 containerd[1527]: time="2025-12-12T17:29:29.330073116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 12 17:29:29.335557 containerd[1527]: time="2025-12-12T17:29:29.335519779Z" level=info msg="Container f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:29.346132 containerd[1527]: time="2025-12-12T17:29:29.346081722Z" level=info msg="CreateContainer within sandbox \"dbec14b57cb71af9ed6861484423a068e9b1196298ba4737abfb0bd5ceb95703\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1\""
Dec 12 17:29:29.346800 containerd[1527]: time="2025-12-12T17:29:29.346734091Z" level=info msg="StartContainer for \"f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1\""
Dec 12 17:29:29.348993 containerd[1527]: time="2025-12-12T17:29:29.348940387Z" level=info msg="connecting to shim f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1" address="unix:///run/containerd/s/0359a6f0ab35b20593708a27c863474fd76c2bf837024e29be68daa012a58ff7" protocol=ttrpc version=3
Dec 12 17:29:29.367189 systemd[1]: Started cri-containerd-f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1.scope - libcontainer container f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1.
Dec 12 17:29:29.394418 containerd[1527]: time="2025-12-12T17:29:29.394107019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fs7fs,Uid:23d4d2f2-63bf-4628-afe1-d8900fae32e7,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:29.411610 containerd[1527]: time="2025-12-12T17:29:29.411565316Z" level=info msg="connecting to shim 19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357" address="unix:///run/containerd/s/4d6a99492afcbbd02a3a812230275b1d38d886699b8a7b9e763f2ab68485f049" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:29.440190 systemd[1]: Started cri-containerd-19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357.scope - libcontainer container 19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357.
Dec 12 17:29:29.450018 containerd[1527]: time="2025-12-12T17:29:29.449083228Z" level=info msg="StartContainer for \"f8569749d5449bff473e80a68ec7ea6ee61310341fbe9ccff9d4d30672bf0dd1\" returns successfully"
Dec 12 17:29:29.479956 containerd[1527]: time="2025-12-12T17:29:29.479906176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fs7fs,Uid:23d4d2f2-63bf-4628-afe1-d8900fae32e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\""
Dec 12 17:29:32.618312 kubelet[2687]: I1212 17:29:32.618078 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4zr8c" podStartSLOduration=4.618056664 podStartE2EDuration="4.618056664s" podCreationTimestamp="2025-12-12 17:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:30.443270074 +0000 UTC m=+7.154332190" watchObservedRunningTime="2025-12-12 17:29:32.618056664 +0000 UTC m=+9.329118660"
Dec 12 17:29:36.014259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693807248.mount: Deactivated successfully.
Dec 12 17:29:37.329884 containerd[1527]: time="2025-12-12T17:29:37.329829793Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:37.330988 containerd[1527]: time="2025-12-12T17:29:37.330875961Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Dec 12 17:29:37.331984 containerd[1527]: time="2025-12-12T17:29:37.331943689Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:37.333515 containerd[1527]: time="2025-12-12T17:29:37.333408524Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.00328833s"
Dec 12 17:29:37.333515 containerd[1527]: time="2025-12-12T17:29:37.333439643Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 12 17:29:37.335692 containerd[1527]: time="2025-12-12T17:29:37.335654935Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 12 17:29:37.338276 containerd[1527]: time="2025-12-12T17:29:37.338246696Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 17:29:37.355366 containerd[1527]: time="2025-12-12T17:29:37.355332852Z" level=info msg="Container 586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:37.358268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140460545.mount: Deactivated successfully.
Dec 12 17:29:37.363925 containerd[1527]: time="2025-12-12T17:29:37.363888590Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\""
Dec 12 17:29:37.364447 containerd[1527]: time="2025-12-12T17:29:37.364426454Z" level=info msg="StartContainer for \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\""
Dec 12 17:29:37.365401 containerd[1527]: time="2025-12-12T17:29:37.365367585Z" level=info msg="connecting to shim 586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" protocol=ttrpc version=3
Dec 12 17:29:37.405959 systemd[1]: Started cri-containerd-586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755.scope - libcontainer container 586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755.
Dec 12 17:29:37.437564 containerd[1527]: time="2025-12-12T17:29:37.437423579Z" level=info msg="StartContainer for \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" returns successfully"
Dec 12 17:29:37.450890 systemd[1]: cri-containerd-586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755.scope: Deactivated successfully.
Dec 12 17:29:37.452115 systemd[1]: cri-containerd-586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755.scope: Consumed 23ms CPU time, 5.6M memory peak, 3.1M written to disk.
Dec 12 17:29:37.521878 containerd[1527]: time="2025-12-12T17:29:37.521819995Z" level=info msg="received container exit event container_id:\"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" id:\"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" pid:3115 exited_at:{seconds:1765560577 nanos:512962306}"
Dec 12 17:29:37.567123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755-rootfs.mount: Deactivated successfully.
Dec 12 17:29:38.455082 containerd[1527]: time="2025-12-12T17:29:38.455043473Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 17:29:38.462875 containerd[1527]: time="2025-12-12T17:29:38.462845886Z" level=info msg="Container fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:38.469982 containerd[1527]: time="2025-12-12T17:29:38.469931720Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\""
Dec 12 17:29:38.470569 containerd[1527]: time="2025-12-12T17:29:38.470541662Z" level=info msg="StartContainer for \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\""
Dec 12 17:29:38.472032 containerd[1527]: time="2025-12-12T17:29:38.471909422Z" level=info msg="connecting to shim fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" protocol=ttrpc version=3
Dec 12 17:29:38.475566 update_engine[1508]: I20251212 17:29:38.475512 1508 update_attempter.cc:509] Updating boot flags...
Dec 12 17:29:38.492586 systemd[1]: Started cri-containerd-fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68.scope - libcontainer container fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68.
Dec 12 17:29:38.615554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084879403.mount: Deactivated successfully.
Dec 12 17:29:38.636689 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:29:38.636892 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:29:38.637085 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:29:38.638739 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:29:38.640702 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:29:38.641085 systemd[1]: cri-containerd-fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68.scope: Deactivated successfully.
Dec 12 17:29:38.642166 containerd[1527]: time="2025-12-12T17:29:38.642124430Z" level=info msg="received container exit event container_id:\"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" id:\"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" pid:3168 exited_at:{seconds:1765560578 nanos:641870398}"
Dec 12 17:29:38.661338 containerd[1527]: time="2025-12-12T17:29:38.661298552Z" level=info msg="StartContainer for \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" returns successfully"
Dec 12 17:29:38.674169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:29:39.064096 containerd[1527]: time="2025-12-12T17:29:39.064048365Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:39.065424 containerd[1527]: time="2025-12-12T17:29:39.065379888Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Dec 12 17:29:39.066914 containerd[1527]: time="2025-12-12T17:29:39.066862687Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:29:39.068530 containerd[1527]: time="2025-12-12T17:29:39.068486802Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.732797709s"
Dec 12 17:29:39.068530 containerd[1527]: time="2025-12-12T17:29:39.068529241Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 12 17:29:39.072047 containerd[1527]: time="2025-12-12T17:29:39.072014825Z" level=info msg="CreateContainer within sandbox \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 17:29:39.077383 containerd[1527]: time="2025-12-12T17:29:39.077352877Z" level=info msg="Container 5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:39.083012 containerd[1527]: time="2025-12-12T17:29:39.082848445Z" level=info msg="CreateContainer within sandbox \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\""
Dec 12 17:29:39.084497 containerd[1527]: time="2025-12-12T17:29:39.084073771Z" level=info msg="StartContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\""
Dec 12 17:29:39.085210 containerd[1527]: time="2025-12-12T17:29:39.085180660Z" level=info msg="connecting to shim 5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295" address="unix:///run/containerd/s/4d6a99492afcbbd02a3a812230275b1d38d886699b8a7b9e763f2ab68485f049" protocol=ttrpc version=3
Dec 12 17:29:39.114204 systemd[1]: Started cri-containerd-5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295.scope - libcontainer container 5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295.
Dec 12 17:29:39.151679 containerd[1527]: time="2025-12-12T17:29:39.151642822Z" level=info msg="StartContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" returns successfully"
Dec 12 17:29:39.458987 containerd[1527]: time="2025-12-12T17:29:39.458631610Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:29:39.466443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68-rootfs.mount: Deactivated successfully.
Dec 12 17:29:39.474650 containerd[1527]: time="2025-12-12T17:29:39.474445692Z" level=info msg="Container 7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:39.488592 containerd[1527]: time="2025-12-12T17:29:39.488524463Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\""
Dec 12 17:29:39.489159 containerd[1527]: time="2025-12-12T17:29:39.489121086Z" level=info msg="StartContainer for \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\""
Dec 12 17:29:39.492797 containerd[1527]: time="2025-12-12T17:29:39.492758586Z" level=info msg="connecting to shim 7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" protocol=ttrpc version=3
Dec 12 17:29:39.499291 kubelet[2687]: I1212 17:29:39.499223 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fs7fs" podStartSLOduration=0.911574402 podStartE2EDuration="10.499204207s" podCreationTimestamp="2025-12-12 17:29:29 +0000 UTC" firstStartedPulling="2025-12-12 17:29:29.481562778 +0000 UTC m=+6.192624774" lastFinishedPulling="2025-12-12 17:29:39.069192583 +0000 UTC m=+15.780254579" observedRunningTime="2025-12-12 17:29:39.496900671 +0000 UTC m=+16.207962667" watchObservedRunningTime="2025-12-12 17:29:39.499204207 +0000 UTC m=+16.210266243"
Dec 12 17:29:39.527179 systemd[1]: Started cri-containerd-7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a.scope - libcontainer container 7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a.
Dec 12 17:29:39.621073 systemd[1]: cri-containerd-7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a.scope: Deactivated successfully.
Dec 12 17:29:39.621246 containerd[1527]: time="2025-12-12T17:29:39.621127874Z" level=info msg="StartContainer for \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" returns successfully"
Dec 12 17:29:39.623579 systemd[1]: cri-containerd-7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a.scope: Consumed 31ms CPU time, 7.8M memory peak, 6.1M read from disk.
Dec 12 17:29:39.626739 containerd[1527]: time="2025-12-12T17:29:39.626701480Z" level=info msg="received container exit event container_id:\"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" id:\"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" pid:3279 exited_at:{seconds:1765560579 nanos:626288852}"
Dec 12 17:29:39.656332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a-rootfs.mount: Deactivated successfully.
Dec 12 17:29:40.470714 containerd[1527]: time="2025-12-12T17:29:40.470186416Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:29:40.479674 containerd[1527]: time="2025-12-12T17:29:40.479638607Z" level=info msg="Container 14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:40.482693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162464432.mount: Deactivated successfully.
Dec 12 17:29:40.490119 containerd[1527]: time="2025-12-12T17:29:40.490011614Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\""
Dec 12 17:29:40.490939 containerd[1527]: time="2025-12-12T17:29:40.490811113Z" level=info msg="StartContainer for \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\""
Dec 12 17:29:40.492068 containerd[1527]: time="2025-12-12T17:29:40.491970043Z" level=info msg="connecting to shim 14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" protocol=ttrpc version=3
Dec 12 17:29:40.513170 systemd[1]: Started cri-containerd-14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a.scope - libcontainer container 14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a.
Dec 12 17:29:40.537422 systemd[1]: cri-containerd-14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a.scope: Deactivated successfully.
Dec 12 17:29:40.545066 containerd[1527]: time="2025-12-12T17:29:40.545016966Z" level=info msg="received container exit event container_id:\"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" id:\"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" pid:3320 exited_at:{seconds:1765560580 nanos:538607095}"
Dec 12 17:29:40.552659 containerd[1527]: time="2025-12-12T17:29:40.552619246Z" level=info msg="StartContainer for \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" returns successfully"
Dec 12 17:29:40.558902 containerd[1527]: time="2025-12-12T17:29:40.547607058Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice/cri-containerd-14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a.scope/memory.events\": no such file or directory"
Dec 12 17:29:40.564795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a-rootfs.mount: Deactivated successfully.
Dec 12 17:29:41.488661 containerd[1527]: time="2025-12-12T17:29:41.486240961Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:29:41.502853 containerd[1527]: time="2025-12-12T17:29:41.502808506Z" level=info msg="Container 0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:41.514148 containerd[1527]: time="2025-12-12T17:29:41.514100263Z" level=info msg="CreateContainer within sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\""
Dec 12 17:29:41.514673 containerd[1527]: time="2025-12-12T17:29:41.514634610Z" level=info msg="StartContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\""
Dec 12 17:29:41.516594 containerd[1527]: time="2025-12-12T17:29:41.516433125Z" level=info msg="connecting to shim 0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5" address="unix:///run/containerd/s/17403a865aadbabfe10c83d23d88363d7913a4f7c7a53c6781e3c8752c3f5634" protocol=ttrpc version=3
Dec 12 17:29:41.541208 systemd[1]: Started cri-containerd-0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5.scope - libcontainer container 0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5.
Dec 12 17:29:41.592878 containerd[1527]: time="2025-12-12T17:29:41.592805010Z" level=info msg="StartContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" returns successfully"
Dec 12 17:29:41.740247 kubelet[2687]: I1212 17:29:41.740069 2687 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 12 17:29:41.821353 systemd[1]: Created slice kubepods-burstable-podf0982e72_a4b9_4869_b251_e9b4f6b47659.slice - libcontainer container kubepods-burstable-podf0982e72_a4b9_4869_b251_e9b4f6b47659.slice.
Dec 12 17:29:41.840485 systemd[1]: Created slice kubepods-burstable-pod34cd8c0b_b9f2_4dc3_8e1d_142b0fed674d.slice - libcontainer container kubepods-burstable-pod34cd8c0b_b9f2_4dc3_8e1d_142b0fed674d.slice.
Dec 12 17:29:41.910968 kubelet[2687]: I1212 17:29:41.910905 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvlg\" (UniqueName: \"kubernetes.io/projected/34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d-kube-api-access-nzvlg\") pod \"coredns-66bc5c9577-k94gr\" (UID: \"34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d\") " pod="kube-system/coredns-66bc5c9577-k94gr"
Dec 12 17:29:41.911162 kubelet[2687]: I1212 17:29:41.911026 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0982e72-a4b9-4869-b251-e9b4f6b47659-config-volume\") pod \"coredns-66bc5c9577-pr8sn\" (UID: \"f0982e72-a4b9-4869-b251-e9b4f6b47659\") " pod="kube-system/coredns-66bc5c9577-pr8sn"
Dec 12 17:29:41.911162 kubelet[2687]: I1212 17:29:41.911066 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvt2x\" (UniqueName: \"kubernetes.io/projected/f0982e72-a4b9-4869-b251-e9b4f6b47659-kube-api-access-wvt2x\") pod \"coredns-66bc5c9577-pr8sn\" (UID: \"f0982e72-a4b9-4869-b251-e9b4f6b47659\") " pod="kube-system/coredns-66bc5c9577-pr8sn"
Dec 12 17:29:41.911162 kubelet[2687]: I1212 17:29:41.911084 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d-config-volume\") pod \"coredns-66bc5c9577-k94gr\" (UID: \"34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d\") " pod="kube-system/coredns-66bc5c9577-k94gr"
Dec 12 17:29:42.126813 containerd[1527]: time="2025-12-12T17:29:42.126685657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pr8sn,Uid:f0982e72-a4b9-4869-b251-e9b4f6b47659,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:42.145902 containerd[1527]: time="2025-12-12T17:29:42.145512167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k94gr,Uid:34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:42.503946 kubelet[2687]: I1212 17:29:42.503815 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pxgrl" podStartSLOduration=6.497632718 podStartE2EDuration="14.503798729s" podCreationTimestamp="2025-12-12 17:29:28 +0000 UTC" firstStartedPulling="2025-12-12 17:29:29.3282952 +0000 UTC m=+6.039357196" lastFinishedPulling="2025-12-12 17:29:37.334461211 +0000 UTC m=+14.045523207" observedRunningTime="2025-12-12 17:29:42.503419858 +0000 UTC m=+19.214481854" watchObservedRunningTime="2025-12-12 17:29:42.503798729 +0000 UTC m=+19.214860725"
Dec 12 17:29:43.662889 systemd-networkd[1454]: cilium_host: Link UP
Dec 12 17:29:43.663121 systemd-networkd[1454]: cilium_net: Link UP
Dec 12 17:29:43.663410 systemd-networkd[1454]: cilium_net: Gained carrier
Dec 12 17:29:43.663709 systemd-networkd[1454]: cilium_host: Gained carrier
Dec 12 17:29:43.745083 systemd-networkd[1454]: cilium_vxlan: Link UP
Dec 12 17:29:43.745227 systemd-networkd[1454]: cilium_vxlan: Gained carrier
Dec 12 17:29:44.015030 kernel: NET: Registered PF_ALG protocol family
Dec 12 17:29:44.121517 systemd-networkd[1454]: cilium_host: Gained IPv6LL
Dec 12 17:29:44.473609 systemd-networkd[1454]: cilium_net: Gained IPv6LL
Dec 12 17:29:44.652543 systemd-networkd[1454]: lxc_health: Link UP
Dec 12 17:29:44.654658 systemd-networkd[1454]: lxc_health: Gained carrier
Dec 12 17:29:45.166241 systemd-networkd[1454]: lxc29b4ceb424b9: Link UP
Dec 12 17:29:45.179008 kernel: eth0: renamed from tmp51914
Dec 12 17:29:45.182051 systemd-networkd[1454]: lxc29b4ceb424b9: Gained carrier
Dec 12 17:29:45.186286 systemd-networkd[1454]: lxcdf5bec543f00: Link UP
Dec 12 17:29:45.198002 kernel: eth0: renamed from tmp20edb
Dec 12 17:29:45.200098 systemd-networkd[1454]: lxcdf5bec543f00: Gained carrier
Dec 12 17:29:45.625195 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL
Dec 12 17:29:46.649226 systemd-networkd[1454]: lxcdf5bec543f00: Gained IPv6LL
Dec 12 17:29:46.649811 systemd-networkd[1454]: lxc_health: Gained IPv6LL
Dec 12 17:29:47.225231 systemd-networkd[1454]: lxc29b4ceb424b9: Gained IPv6LL
Dec 12 17:29:48.786739 containerd[1527]: time="2025-12-12T17:29:48.786533534Z" level=info msg="connecting to shim 20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9" address="unix:///run/containerd/s/bcee85b6877cb3b61c3492254c0732e5fd5b515b6224c503faae3ca4a541023d" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:48.794115 containerd[1527]: time="2025-12-12T17:29:48.794071957Z" level=info msg="connecting to shim 51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa" address="unix:///run/containerd/s/8bfb0947cb340bdfbbe0c81eea72c972f5830b2d2e547d817d596d103a0b9176" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:48.820290 systemd[1]: Started cri-containerd-20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9.scope - libcontainer container 20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9.
Dec 12 17:29:48.823258 systemd[1]: Started cri-containerd-51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa.scope - libcontainer container 51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa.
Dec 12 17:29:48.834874 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 17:29:48.836957 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 17:29:48.860237 containerd[1527]: time="2025-12-12T17:29:48.860195473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pr8sn,Uid:f0982e72-a4b9-4869-b251-e9b4f6b47659,Namespace:kube-system,Attempt:0,} returns sandbox id \"51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa\""
Dec 12 17:29:48.862246 containerd[1527]: time="2025-12-12T17:29:48.862195037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k94gr,Uid:34cd8c0b-b9f2-4dc3-8e1d-142b0fed674d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9\""
Dec 12 17:29:48.866621 containerd[1527]: time="2025-12-12T17:29:48.866113605Z" level=info msg="CreateContainer within sandbox \"51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 17:29:48.869606 containerd[1527]: time="2025-12-12T17:29:48.868828236Z" level=info msg="CreateContainer within sandbox \"20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 17:29:48.879286 containerd[1527]: time="2025-12-12T17:29:48.879234326Z" level=info msg="Container 806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:48.884309 containerd[1527]: time="2025-12-12T17:29:48.884237235Z" level=info msg="Container 59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:48.888263 containerd[1527]: time="2025-12-12T17:29:48.888213003Z" level=info msg="CreateContainer within sandbox \"51914a924bd0bf43ed3a015eabb565ec692003cf91b0b491100df69b089e4aaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f\""
Dec 12 17:29:48.888902 containerd[1527]: time="2025-12-12T17:29:48.888857751Z" level=info msg="StartContainer for \"806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f\""
Dec 12 17:29:48.890901 containerd[1527]: time="2025-12-12T17:29:48.890861635Z" level=info msg="CreateContainer within sandbox \"20edb87e373e3aafffe085fa90a3360e71a9562279f3b883562893b8af4f7bd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275\""
Dec 12 17:29:48.891116 containerd[1527]: time="2025-12-12T17:29:48.890885634Z" level=info msg="connecting to shim 806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f" address="unix:///run/containerd/s/8bfb0947cb340bdfbbe0c81eea72c972f5830b2d2e547d817d596d103a0b9176" protocol=ttrpc version=3
Dec 12 17:29:48.891842 containerd[1527]: time="2025-12-12T17:29:48.891811057Z" level=info msg="StartContainer for \"59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275\""
Dec 12 17:29:48.893111 containerd[1527]: time="2025-12-12T17:29:48.893077154Z" level=info msg="connecting to shim 59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275" address="unix:///run/containerd/s/bcee85b6877cb3b61c3492254c0732e5fd5b515b6224c503faae3ca4a541023d" protocol=ttrpc version=3
Dec 12 17:29:48.914182 systemd[1]: Started cri-containerd-806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f.scope - libcontainer container 806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f.
Dec 12 17:29:48.917512 systemd[1]: Started cri-containerd-59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275.scope - libcontainer container 59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275.
Dec 12 17:29:48.944970 containerd[1527]: time="2025-12-12T17:29:48.944919770Z" level=info msg="StartContainer for \"806e7f2c5a9ebfe47e43f78213f85b1651d5894fc3e291881e53945416d6e00f\" returns successfully"
Dec 12 17:29:48.951945 containerd[1527]: time="2025-12-12T17:29:48.951889403Z" level=info msg="StartContainer for \"59e7a58ffaeeb8bf6d4b16279b7006fb747ca7e4fcb3446040dff68d2709e275\" returns successfully"
Dec 12 17:29:49.514807 kubelet[2687]: I1212 17:29:49.514737 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k94gr" podStartSLOduration=20.5147193 podStartE2EDuration="20.5147193s" podCreationTimestamp="2025-12-12 17:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:49.514270428 +0000 UTC m=+26.225332424" watchObservedRunningTime="2025-12-12 17:29:49.5147193 +0000 UTC m=+26.225781336"
Dec 12 17:29:49.529690 kubelet[2687]: I1212 17:29:49.528407 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pr8sn" podStartSLOduration=20.528388901 podStartE2EDuration="20.528388901s" podCreationTimestamp="2025-12-12 17:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:49.528172025 +0000 UTC m=+26.239234021" watchObservedRunningTime="2025-12-12 17:29:49.528388901 +0000 UTC m=+26.239450897"
Dec 12 17:29:49.764155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394000472.mount: Deactivated successfully.
Dec 12 17:29:53.586533 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:57286.service - OpenSSH per-connection server daemon (10.0.0.1:57286).
Dec 12 17:29:53.656364 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 57286 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:53.661164 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:53.667069 systemd-logind[1505]: New session 8 of user core.
Dec 12 17:29:53.672158 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 17:29:53.817717 sshd[4048]: Connection closed by 10.0.0.1 port 57286
Dec 12 17:29:53.818095 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:53.822448 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:57286.service: Deactivated successfully.
Dec 12 17:29:53.826337 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 17:29:53.827224 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit.
Dec 12 17:29:53.828703 systemd-logind[1505]: Removed session 8.
Dec 12 17:29:58.834141 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:57302.service - OpenSSH per-connection server daemon (10.0.0.1:57302).
Dec 12 17:29:58.907147 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 57302 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:29:58.908529 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:29:58.913036 systemd-logind[1505]: New session 9 of user core.
Dec 12 17:29:58.921132 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 17:29:59.036024 sshd[4072]: Connection closed by 10.0.0.1 port 57302
Dec 12 17:29:59.036136 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
Dec 12 17:29:59.042113 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit.
Dec 12 17:29:59.042570 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:57302.service: Deactivated successfully. Dec 12 17:29:59.045438 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:29:59.048548 systemd-logind[1505]: Removed session 9. Dec 12 17:30:04.052937 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592). Dec 12 17:30:04.099791 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:04.100493 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:04.104877 systemd-logind[1505]: New session 10 of user core. Dec 12 17:30:04.116161 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:30:04.228251 sshd[4092]: Connection closed by 10.0.0.1 port 36592 Dec 12 17:30:04.228808 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:04.231675 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:36592.service: Deactivated successfully. Dec 12 17:30:04.233289 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:30:04.235155 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:30:04.236596 systemd-logind[1505]: Removed session 10. Dec 12 17:30:09.250831 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:36604.service - OpenSSH per-connection server daemon (10.0.0.1:36604). Dec 12 17:30:09.307190 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 36604 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:09.308495 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:09.316332 systemd-logind[1505]: New session 11 of user core. Dec 12 17:30:09.326187 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:30:09.465392 sshd[4109]: Connection closed by 10.0.0.1 port 36604 Dec 12 17:30:09.464360 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:09.467876 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:36604.service: Deactivated successfully. Dec 12 17:30:09.469495 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:30:09.471590 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:30:09.475525 systemd-logind[1505]: Removed session 11. Dec 12 17:30:14.480511 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Dec 12 17:30:14.544680 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:14.546570 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:14.551404 systemd-logind[1505]: New session 12 of user core. Dec 12 17:30:14.557166 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:30:14.681543 sshd[4127]: Connection closed by 10.0.0.1 port 55068 Dec 12 17:30:14.681887 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:14.691182 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:55068.service: Deactivated successfully. Dec 12 17:30:14.694937 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:30:14.696284 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:30:14.699668 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:55070.service - OpenSSH per-connection server daemon (10.0.0.1:55070). Dec 12 17:30:14.701053 systemd-logind[1505]: Removed session 12. 
Dec 12 17:30:14.769287 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 55070 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:14.770601 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:14.776270 systemd-logind[1505]: New session 13 of user core. Dec 12 17:30:14.788170 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:30:14.960386 sshd[4144]: Connection closed by 10.0.0.1 port 55070 Dec 12 17:30:14.961472 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:14.968213 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:55070.service: Deactivated successfully. Dec 12 17:30:14.970041 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:30:14.973162 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:30:14.975845 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:55078.service - OpenSSH per-connection server daemon (10.0.0.1:55078). Dec 12 17:30:14.978848 systemd-logind[1505]: Removed session 13. Dec 12 17:30:15.046713 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 55078 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:15.048572 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:15.052349 systemd-logind[1505]: New session 14 of user core. Dec 12 17:30:15.059146 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:30:15.173001 sshd[4158]: Connection closed by 10.0.0.1 port 55078 Dec 12 17:30:15.173470 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:15.177265 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:55078.service: Deactivated successfully. Dec 12 17:30:15.178905 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:30:15.181451 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. 
Dec 12 17:30:15.184389 systemd-logind[1505]: Removed session 14. Dec 12 17:30:20.188198 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:55088.service - OpenSSH per-connection server daemon (10.0.0.1:55088). Dec 12 17:30:20.237271 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 55088 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:20.238644 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:20.242731 systemd-logind[1505]: New session 15 of user core. Dec 12 17:30:20.252170 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:30:20.367173 sshd[4176]: Connection closed by 10.0.0.1 port 55088 Dec 12 17:30:20.367532 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:20.370836 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:55088.service: Deactivated successfully. Dec 12 17:30:20.372468 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:30:20.373213 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:30:20.374363 systemd-logind[1505]: Removed session 15. Dec 12 17:30:25.382433 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946). Dec 12 17:30:25.456776 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:25.458171 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:25.464890 systemd-logind[1505]: New session 16 of user core. Dec 12 17:30:25.473148 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:30:25.594549 sshd[4195]: Connection closed by 10.0.0.1 port 51946 Dec 12 17:30:25.593571 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:25.617234 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:51946.service: Deactivated successfully. Dec 12 17:30:25.619548 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:30:25.620466 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:30:25.623607 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:51954.service - OpenSSH per-connection server daemon (10.0.0.1:51954). Dec 12 17:30:25.624195 systemd-logind[1505]: Removed session 16. Dec 12 17:30:25.683748 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 51954 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:25.685520 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:25.689999 systemd-logind[1505]: New session 17 of user core. Dec 12 17:30:25.702146 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:30:25.881168 sshd[4211]: Connection closed by 10.0.0.1 port 51954 Dec 12 17:30:25.882075 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:25.896705 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:51954.service: Deactivated successfully. Dec 12 17:30:25.899703 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:30:25.900443 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:30:25.903353 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970). Dec 12 17:30:25.904385 systemd-logind[1505]: Removed session 17. 
Dec 12 17:30:25.976840 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:25.978089 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:25.984031 systemd-logind[1505]: New session 18 of user core. Dec 12 17:30:26.004167 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:30:26.614468 sshd[4226]: Connection closed by 10.0.0.1 port 51970 Dec 12 17:30:26.614835 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:26.624102 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:51970.service: Deactivated successfully. Dec 12 17:30:26.629111 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:30:26.631237 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:30:26.634695 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976). Dec 12 17:30:26.635511 systemd-logind[1505]: Removed session 18. Dec 12 17:30:26.694846 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:26.696186 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:26.700052 systemd-logind[1505]: New session 19 of user core. Dec 12 17:30:26.707105 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:30:26.941630 sshd[4247]: Connection closed by 10.0.0.1 port 51976 Dec 12 17:30:26.941962 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:26.954252 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:51976.service: Deactivated successfully. Dec 12 17:30:26.957940 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:30:26.961364 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. 
Dec 12 17:30:26.967299 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:51978.service - OpenSSH per-connection server daemon (10.0.0.1:51978). Dec 12 17:30:26.968055 systemd-logind[1505]: Removed session 19. Dec 12 17:30:27.028048 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:27.029098 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:27.033032 systemd-logind[1505]: New session 20 of user core. Dec 12 17:30:27.039198 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:30:27.151957 sshd[4262]: Connection closed by 10.0.0.1 port 51978 Dec 12 17:30:27.152278 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:27.155917 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:51978.service: Deactivated successfully. Dec 12 17:30:27.158516 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:30:27.159872 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:30:27.161506 systemd-logind[1505]: Removed session 20. Dec 12 17:30:32.164183 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:58322.service - OpenSSH per-connection server daemon (10.0.0.1:58322). Dec 12 17:30:32.241800 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 58322 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:32.243078 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:32.246944 systemd-logind[1505]: New session 21 of user core. Dec 12 17:30:32.259179 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:30:32.368546 sshd[4284]: Connection closed by 10.0.0.1 port 58322 Dec 12 17:30:32.368866 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:32.372734 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:58322.service: Deactivated successfully. Dec 12 17:30:32.374555 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:30:32.375328 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:30:32.376496 systemd-logind[1505]: Removed session 21. Dec 12 17:30:37.381518 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332). Dec 12 17:30:37.452234 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:37.453484 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:37.458006 systemd-logind[1505]: New session 22 of user core. Dec 12 17:30:37.464162 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:30:37.573291 sshd[4300]: Connection closed by 10.0.0.1 port 58332 Dec 12 17:30:37.573618 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:37.577043 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:58332.service: Deactivated successfully. Dec 12 17:30:37.579092 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:30:37.579846 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:30:37.580917 systemd-logind[1505]: Removed session 22. Dec 12 17:30:42.588430 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:51640.service - OpenSSH per-connection server daemon (10.0.0.1:51640). 
Dec 12 17:30:42.654353 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 51640 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:42.655713 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:42.659710 systemd-logind[1505]: New session 23 of user core. Dec 12 17:30:42.669196 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:30:42.784547 sshd[4316]: Connection closed by 10.0.0.1 port 51640 Dec 12 17:30:42.785127 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:42.797415 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:51640.service: Deactivated successfully. Dec 12 17:30:42.799692 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:30:42.800493 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:30:42.803364 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:51656.service - OpenSSH per-connection server daemon (10.0.0.1:51656). Dec 12 17:30:42.804260 systemd-logind[1505]: Removed session 23. Dec 12 17:30:42.872384 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 51656 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:42.873460 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:42.877565 systemd-logind[1505]: New session 24 of user core. Dec 12 17:30:42.892209 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 17:30:44.967000 containerd[1527]: time="2025-12-12T17:30:44.966932070Z" level=info msg="StopContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" with timeout 30 (s)" Dec 12 17:30:44.973478 containerd[1527]: time="2025-12-12T17:30:44.973414972Z" level=info msg="Stop container \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" with signal terminated" Dec 12 17:30:44.991172 systemd[1]: cri-containerd-5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295.scope: Deactivated successfully. Dec 12 17:30:44.995654 containerd[1527]: time="2025-12-12T17:30:44.994170354Z" level=info msg="received container exit event container_id:\"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" id:\"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" pid:3243 exited_at:{seconds:1765560644 nanos:993729035}" Dec 12 17:30:45.017006 containerd[1527]: time="2025-12-12T17:30:45.016224651Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:30:45.028956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295-rootfs.mount: Deactivated successfully. 
Dec 12 17:30:45.030810 containerd[1527]: time="2025-12-12T17:30:45.030654049Z" level=info msg="StopContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" with timeout 2 (s)" Dec 12 17:30:45.031283 containerd[1527]: time="2025-12-12T17:30:45.031172088Z" level=info msg="Stop container \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" with signal terminated" Dec 12 17:30:45.041408 containerd[1527]: time="2025-12-12T17:30:45.041089819Z" level=info msg="StopContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" returns successfully" Dec 12 17:30:45.043345 systemd-networkd[1454]: lxc_health: Link DOWN Dec 12 17:30:45.043747 systemd-networkd[1454]: lxc_health: Lost carrier Dec 12 17:30:45.049526 containerd[1527]: time="2025-12-12T17:30:45.049339955Z" level=info msg="StopPodSandbox for \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\"" Dec 12 17:30:45.057892 systemd[1]: cri-containerd-0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5.scope: Deactivated successfully. Dec 12 17:30:45.058263 systemd[1]: cri-containerd-0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5.scope: Consumed 6.350s CPU time, 122.6M memory peak, 132K read from disk, 12.9M written to disk. 
Dec 12 17:30:45.060214 containerd[1527]: time="2025-12-12T17:30:45.060170604Z" level=info msg="received container exit event container_id:\"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" id:\"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" pid:3356 exited_at:{seconds:1765560645 nanos:58314569}" Dec 12 17:30:45.060494 containerd[1527]: time="2025-12-12T17:30:45.060256524Z" level=info msg="Container to stop \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.074849 systemd[1]: cri-containerd-19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357.scope: Deactivated successfully. Dec 12 17:30:45.078392 containerd[1527]: time="2025-12-12T17:30:45.078340311Z" level=info msg="received sandbox exit event container_id:\"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" id:\"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" exit_status:137 exited_at:{seconds:1765560645 nanos:78059912}" monitor_name=podsandbox Dec 12 17:30:45.083371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5-rootfs.mount: Deactivated successfully. Dec 12 17:30:45.101846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357-rootfs.mount: Deactivated successfully. 
Dec 12 17:30:45.109207 containerd[1527]: time="2025-12-12T17:30:45.109167902Z" level=info msg="StopContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" returns successfully" Dec 12 17:30:45.109758 containerd[1527]: time="2025-12-12T17:30:45.109729981Z" level=info msg="shim disconnected" id=19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357 namespace=k8s.io Dec 12 17:30:45.109816 containerd[1527]: time="2025-12-12T17:30:45.109757021Z" level=warning msg="cleaning up after shim disconnected" id=19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357 namespace=k8s.io Dec 12 17:30:45.109816 containerd[1527]: time="2025-12-12T17:30:45.109789060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:30:45.110572 containerd[1527]: time="2025-12-12T17:30:45.110510138Z" level=info msg="StopPodSandbox for \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\"" Dec 12 17:30:45.110816 containerd[1527]: time="2025-12-12T17:30:45.110580378Z" level=info msg="Container to stop \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.110816 containerd[1527]: time="2025-12-12T17:30:45.110805578Z" level=info msg="Container to stop \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.110883 containerd[1527]: time="2025-12-12T17:30:45.110818098Z" level=info msg="Container to stop \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.110883 containerd[1527]: time="2025-12-12T17:30:45.110828457Z" level=info msg="Container to stop \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.110883 
containerd[1527]: time="2025-12-12T17:30:45.110836177Z" level=info msg="Container to stop \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:30:45.117348 systemd[1]: cri-containerd-4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928.scope: Deactivated successfully. Dec 12 17:30:45.120158 containerd[1527]: time="2025-12-12T17:30:45.119915951Z" level=info msg="received sandbox exit event container_id:\"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" id:\"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" exit_status:137 exited_at:{seconds:1765560645 nanos:119408513}" monitor_name=podsandbox Dec 12 17:30:45.128573 containerd[1527]: time="2025-12-12T17:30:45.128527086Z" level=info msg="received sandbox container exit event sandbox_id:\"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" exit_status:137 exited_at:{seconds:1765560645 nanos:78059912}" monitor_name=criService Dec 12 17:30:45.129276 containerd[1527]: time="2025-12-12T17:30:45.129237884Z" level=info msg="TearDown network for sandbox \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" successfully" Dec 12 17:30:45.129323 containerd[1527]: time="2025-12-12T17:30:45.129277764Z" level=info msg="StopPodSandbox for \"19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357\" returns successfully" Dec 12 17:30:45.130559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19d18d1287e45d41e7b84c343fc31390472cf6a2e12e608bc748d2b68429e357-shm.mount: Deactivated successfully. Dec 12 17:30:45.150814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928-rootfs.mount: Deactivated successfully. 
Dec 12 17:30:45.158333 containerd[1527]: time="2025-12-12T17:30:45.158178201Z" level=info msg="shim disconnected" id=4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928 namespace=k8s.io Dec 12 17:30:45.158333 containerd[1527]: time="2025-12-12T17:30:45.158212721Z" level=warning msg="cleaning up after shim disconnected" id=4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928 namespace=k8s.io Dec 12 17:30:45.158333 containerd[1527]: time="2025-12-12T17:30:45.158243921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:30:45.173851 containerd[1527]: time="2025-12-12T17:30:45.173769756Z" level=info msg="received sandbox container exit event sandbox_id:\"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" exit_status:137 exited_at:{seconds:1765560645 nanos:119408513}" monitor_name=criService Dec 12 17:30:45.174006 containerd[1527]: time="2025-12-12T17:30:45.173959475Z" level=info msg="TearDown network for sandbox \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" successfully" Dec 12 17:30:45.174038 containerd[1527]: time="2025-12-12T17:30:45.174003435Z" level=info msg="StopPodSandbox for \"4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928\" returns successfully" Dec 12 17:30:45.339749 kubelet[2687]: I1212 17:30:45.339611 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-hostproc\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.339749 kubelet[2687]: I1212 17:30:45.339666 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqzkz\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-kube-api-access-rqzkz\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.339749 
kubelet[2687]: I1212 17:30:45.339684 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-xtables-lock\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.339749 kubelet[2687]: I1212 17:30:45.339698 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-kernel\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.339749 kubelet[2687]: I1212 17:30:45.339713 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-cgroup\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.339749 kubelet[2687]: I1212 17:30:45.339730 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-net\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339747 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250c4f7b-fa61-4469-b3af-2ac66ad11387-clustermesh-secrets\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339782 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-lib-modules\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339799 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-run\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339814 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-hubble-tls\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339829 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-bpf-maps\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342074 kubelet[2687]: I1212 17:30:45.339845 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-config-path\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342212 kubelet[2687]: I1212 17:30:45.339858 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cni-path\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.342212 kubelet[2687]: I1212 17:30:45.339874 2687 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-82tmn\" (UniqueName: \"kubernetes.io/projected/23d4d2f2-63bf-4628-afe1-d8900fae32e7-kube-api-access-82tmn\") pod \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\" (UID: \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\") " Dec 12 17:30:45.342212 kubelet[2687]: I1212 17:30:45.339891 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23d4d2f2-63bf-4628-afe1-d8900fae32e7-cilium-config-path\") pod \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\" (UID: \"23d4d2f2-63bf-4628-afe1-d8900fae32e7\") " Dec 12 17:30:45.342212 kubelet[2687]: I1212 17:30:45.339906 2687 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-etc-cni-netd\") pod \"250c4f7b-fa61-4469-b3af-2ac66ad11387\" (UID: \"250c4f7b-fa61-4469-b3af-2ac66ad11387\") " Dec 12 17:30:45.344009 kubelet[2687]: I1212 17:30:45.343602 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.344009 kubelet[2687]: I1212 17:30:45.343685 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.344106 kubelet[2687]: I1212 17:30:45.344047 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.344134 kubelet[2687]: I1212 17:30:45.344109 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-hostproc" (OuterVolumeSpecName: "hostproc") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.344587 kubelet[2687]: I1212 17:30:45.344555 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.344662 kubelet[2687]: I1212 17:30:45.344611 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.345164 kubelet[2687]: I1212 17:30:45.345037 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.345309 kubelet[2687]: I1212 17:30:45.345292 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cni-path" (OuterVolumeSpecName: "cni-path") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.345384 kubelet[2687]: I1212 17:30:45.345371 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.346591 kubelet[2687]: I1212 17:30:45.345892 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:30:45.347922 kubelet[2687]: I1212 17:30:45.347884 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d4d2f2-63bf-4628-afe1-d8900fae32e7-kube-api-access-82tmn" (OuterVolumeSpecName: "kube-api-access-82tmn") pod "23d4d2f2-63bf-4628-afe1-d8900fae32e7" (UID: "23d4d2f2-63bf-4628-afe1-d8900fae32e7"). InnerVolumeSpecName "kube-api-access-82tmn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:30:45.348289 kubelet[2687]: I1212 17:30:45.348251 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-kube-api-access-rqzkz" (OuterVolumeSpecName: "kube-api-access-rqzkz") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "kube-api-access-rqzkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:30:45.349175 kubelet[2687]: I1212 17:30:45.349077 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d4d2f2-63bf-4628-afe1-d8900fae32e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23d4d2f2-63bf-4628-afe1-d8900fae32e7" (UID: "23d4d2f2-63bf-4628-afe1-d8900fae32e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:30:45.349175 kubelet[2687]: I1212 17:30:45.349127 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250c4f7b-fa61-4469-b3af-2ac66ad11387-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:30:45.350050 kubelet[2687]: I1212 17:30:45.350003 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:30:45.350617 kubelet[2687]: I1212 17:30:45.350574 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "250c4f7b-fa61-4469-b3af-2ac66ad11387" (UID: "250c4f7b-fa61-4469-b3af-2ac66ad11387"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:30:45.397161 systemd[1]: Removed slice kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice - libcontainer container kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice. Dec 12 17:30:45.397267 systemd[1]: kubepods-burstable-pod250c4f7b_fa61_4469_b3af_2ac66ad11387.slice: Consumed 6.450s CPU time, 123M memory peak, 6.2M read from disk, 16.1M written to disk. Dec 12 17:30:45.398894 systemd[1]: Removed slice kubepods-besteffort-pod23d4d2f2_63bf_4628_afe1_d8900fae32e7.slice - libcontainer container kubepods-besteffort-pod23d4d2f2_63bf_4628_afe1_d8900fae32e7.slice. 
Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440907 2687 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440943 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqzkz\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-kube-api-access-rqzkz\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440955 2687 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440964 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440982 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440991 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.440999 2687 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250c4f7b-fa61-4469-b3af-2ac66ad11387-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.440960 kubelet[2687]: I1212 17:30:45.441006 2687 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441014 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441021 2687 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250c4f7b-fa61-4469-b3af-2ac66ad11387-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441028 2687 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441035 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250c4f7b-fa61-4469-b3af-2ac66ad11387-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441042 2687 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441052 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82tmn\" (UniqueName: \"kubernetes.io/projected/23d4d2f2-63bf-4628-afe1-d8900fae32e7-kube-api-access-82tmn\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441059 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/23d4d2f2-63bf-4628-afe1-d8900fae32e7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.441270 kubelet[2687]: I1212 17:30:45.441066 2687 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250c4f7b-fa61-4469-b3af-2ac66ad11387-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 12 17:30:45.649526 kubelet[2687]: I1212 17:30:45.649416 2687 scope.go:117] "RemoveContainer" containerID="0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5" Dec 12 17:30:45.657296 containerd[1527]: time="2025-12-12T17:30:45.655965163Z" level=info msg="RemoveContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\"" Dec 12 17:30:45.668261 containerd[1527]: time="2025-12-12T17:30:45.668214408Z" level=info msg="RemoveContainer for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" returns successfully" Dec 12 17:30:45.670500 kubelet[2687]: I1212 17:30:45.670460 2687 scope.go:117] "RemoveContainer" containerID="14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a" Dec 12 17:30:45.676121 containerd[1527]: time="2025-12-12T17:30:45.676082225Z" level=info msg="RemoveContainer for \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\"" Dec 12 17:30:45.682858 containerd[1527]: time="2025-12-12T17:30:45.682778246Z" level=info msg="RemoveContainer for \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" returns successfully" Dec 12 17:30:45.684568 kubelet[2687]: I1212 17:30:45.684502 2687 scope.go:117] "RemoveContainer" containerID="7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a" Dec 12 17:30:45.689039 containerd[1527]: time="2025-12-12T17:30:45.688434550Z" level=info msg="RemoveContainer for \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\"" Dec 12 17:30:45.693140 containerd[1527]: time="2025-12-12T17:30:45.693082816Z" level=info msg="RemoveContainer 
for \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" returns successfully" Dec 12 17:30:45.693395 kubelet[2687]: I1212 17:30:45.693362 2687 scope.go:117] "RemoveContainer" containerID="fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68" Dec 12 17:30:45.696804 containerd[1527]: time="2025-12-12T17:30:45.696751206Z" level=info msg="RemoveContainer for \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\"" Dec 12 17:30:45.701802 containerd[1527]: time="2025-12-12T17:30:45.701203833Z" level=info msg="RemoveContainer for \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" returns successfully" Dec 12 17:30:45.702019 kubelet[2687]: I1212 17:30:45.701517 2687 scope.go:117] "RemoveContainer" containerID="586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755" Dec 12 17:30:45.705870 containerd[1527]: time="2025-12-12T17:30:45.705829779Z" level=info msg="RemoveContainer for \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\"" Dec 12 17:30:45.709138 containerd[1527]: time="2025-12-12T17:30:45.709092850Z" level=info msg="RemoveContainer for \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" returns successfully" Dec 12 17:30:45.709521 kubelet[2687]: I1212 17:30:45.709459 2687 scope.go:117] "RemoveContainer" containerID="0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5" Dec 12 17:30:45.715576 containerd[1527]: time="2025-12-12T17:30:45.709813088Z" level=error msg="ContainerStatus for \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\": not found" Dec 12 17:30:45.715871 kubelet[2687]: E1212 17:30:45.715839 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\": not found" containerID="0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5" Dec 12 17:30:45.716060 kubelet[2687]: I1212 17:30:45.715998 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5"} err="failed to get container status \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d7e192c4bdf4df9fefb41b03133c874bb3d95440608696a939b66ea179c07e5\": not found" Dec 12 17:30:45.716147 kubelet[2687]: I1212 17:30:45.716135 2687 scope.go:117] "RemoveContainer" containerID="14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a" Dec 12 17:30:45.716556 containerd[1527]: time="2025-12-12T17:30:45.716518148Z" level=error msg="ContainerStatus for \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\": not found" Dec 12 17:30:45.716733 kubelet[2687]: E1212 17:30:45.716679 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\": not found" containerID="14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a" Dec 12 17:30:45.716839 kubelet[2687]: I1212 17:30:45.716821 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a"} err="failed to get container status \"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"14f0bbef00a385bbe556c1f4dad8f61e80e68b8bdcbd42489955f0d971e4520a\": not found" Dec 12 17:30:45.716934 kubelet[2687]: I1212 17:30:45.716891 2687 scope.go:117] "RemoveContainer" containerID="7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a" Dec 12 17:30:45.717157 containerd[1527]: time="2025-12-12T17:30:45.717126027Z" level=error msg="ContainerStatus for \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\": not found" Dec 12 17:30:45.717341 kubelet[2687]: E1212 17:30:45.717286 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\": not found" containerID="7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a" Dec 12 17:30:45.717434 kubelet[2687]: I1212 17:30:45.717415 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a"} err="failed to get container status \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\": rpc error: code = NotFound desc = an error occurred when try to find container \"7faa91a66c8f1b428c78959518b6230c6f5a6d25c1e94f668ca62573cbe7f01a\": not found" Dec 12 17:30:45.717546 kubelet[2687]: I1212 17:30:45.717500 2687 scope.go:117] "RemoveContainer" containerID="fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68" Dec 12 17:30:45.717773 containerd[1527]: time="2025-12-12T17:30:45.717704745Z" level=error msg="ContainerStatus for \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\": not found" Dec 12 17:30:45.718099 kubelet[2687]: E1212 17:30:45.718027 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\": not found" containerID="fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68" Dec 12 17:30:45.718099 kubelet[2687]: I1212 17:30:45.718053 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68"} err="failed to get container status \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbd4043fabd36f285a0f97ce7d8df5439176c23a8f85e2a66c00832b4bf8ff68\": not found" Dec 12 17:30:45.718099 kubelet[2687]: I1212 17:30:45.718067 2687 scope.go:117] "RemoveContainer" containerID="586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755" Dec 12 17:30:45.718381 containerd[1527]: time="2025-12-12T17:30:45.718354743Z" level=error msg="ContainerStatus for \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\": not found" Dec 12 17:30:45.718519 kubelet[2687]: E1212 17:30:45.718499 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\": not found" containerID="586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755" Dec 12 17:30:45.718627 kubelet[2687]: I1212 17:30:45.718611 2687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755"} err="failed to get container status \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\": rpc error: code = NotFound desc = an error occurred when try to find container \"586a26287b8f042a41832e5b2cb91e0e77409a91af2d593510ab95387f0ab755\": not found" Dec 12 17:30:45.718688 kubelet[2687]: I1212 17:30:45.718677 2687 scope.go:117] "RemoveContainer" containerID="5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295" Dec 12 17:30:45.720391 containerd[1527]: time="2025-12-12T17:30:45.720359497Z" level=info msg="RemoveContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\"" Dec 12 17:30:45.723613 containerd[1527]: time="2025-12-12T17:30:45.723571088Z" level=info msg="RemoveContainer for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" returns successfully" Dec 12 17:30:45.723826 kubelet[2687]: I1212 17:30:45.723802 2687 scope.go:117] "RemoveContainer" containerID="5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295" Dec 12 17:30:45.724266 containerd[1527]: time="2025-12-12T17:30:45.724232966Z" level=error msg="ContainerStatus for \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\": not found" Dec 12 17:30:45.724522 kubelet[2687]: E1212 17:30:45.724396 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\": not found" containerID="5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295" Dec 12 17:30:45.724522 kubelet[2687]: I1212 17:30:45.724428 2687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295"} err="failed to get container status \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bc9dc4a8a79e0cafc4af8cb0974332e8f356adbb228206daeca03753d40b295\": not found" Dec 12 17:30:46.029046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ca455ad0b05c8663181e960b8ea0735cb67771d09bd63da83997b8d55a54928-shm.mount: Deactivated successfully. Dec 12 17:30:46.029157 systemd[1]: var-lib-kubelet-pods-23d4d2f2\x2d63bf\x2d4628\x2dafe1\x2dd8900fae32e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d82tmn.mount: Deactivated successfully. Dec 12 17:30:46.029218 systemd[1]: var-lib-kubelet-pods-250c4f7b\x2dfa61\x2d4469\x2db3af\x2d2ac66ad11387-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqzkz.mount: Deactivated successfully. Dec 12 17:30:46.029275 systemd[1]: var-lib-kubelet-pods-250c4f7b\x2dfa61\x2d4469\x2db3af\x2d2ac66ad11387-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:30:46.029322 systemd[1]: var-lib-kubelet-pods-250c4f7b\x2dfa61\x2d4469\x2db3af\x2d2ac66ad11387-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 17:30:46.910590 sshd[4333]: Connection closed by 10.0.0.1 port 51656 Dec 12 17:30:46.912416 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:46.924056 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:51656.service: Deactivated successfully. Dec 12 17:30:46.927143 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:30:46.927879 systemd[1]: session-24.scope: Consumed 1.389s CPU time, 24.8M memory peak. Dec 12 17:30:46.930188 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit. 
Dec 12 17:30:46.935231 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). Dec 12 17:30:46.935784 systemd-logind[1505]: Removed session 24. Dec 12 17:30:46.993203 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:46.994525 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:46.999369 systemd-logind[1505]: New session 25 of user core. Dec 12 17:30:47.010126 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:30:47.392356 kubelet[2687]: I1212 17:30:47.392302 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d4d2f2-63bf-4628-afe1-d8900fae32e7" path="/var/lib/kubelet/pods/23d4d2f2-63bf-4628-afe1-d8900fae32e7/volumes" Dec 12 17:30:47.392827 kubelet[2687]: I1212 17:30:47.392794 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250c4f7b-fa61-4469-b3af-2ac66ad11387" path="/var/lib/kubelet/pods/250c4f7b-fa61-4469-b3af-2ac66ad11387/volumes" Dec 12 17:30:48.454256 kubelet[2687]: E1212 17:30:48.454205 2687 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:30:48.681521 sshd[4484]: Connection closed by 10.0.0.1 port 51658 Dec 12 17:30:48.683186 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:48.692795 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:51658.service: Deactivated successfully. Dec 12 17:30:48.695701 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:30:48.695901 systemd[1]: session-25.scope: Consumed 1.562s CPU time, 26.5M memory peak. Dec 12 17:30:48.697317 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit. 
Dec 12 17:30:48.702556 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668). Dec 12 17:30:48.705745 systemd-logind[1505]: Removed session 25. Dec 12 17:30:48.717337 systemd[1]: Created slice kubepods-burstable-pod7a77ebe3_caaf_4f00_8781_3eb6c8178450.slice - libcontainer container kubepods-burstable-pod7a77ebe3_caaf_4f00_8781_3eb6c8178450.slice. Dec 12 17:30:48.766152 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:30:48.767288 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:48.771296 systemd-logind[1505]: New session 26 of user core. Dec 12 17:30:48.786145 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 12 17:30:48.835529 sshd[4498]: Connection closed by 10.0.0.1 port 51668 Dec 12 17:30:48.835839 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:48.854137 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:51668.service: Deactivated successfully. Dec 12 17:30:48.855690 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 17:30:48.856369 systemd-logind[1505]: Session 26 logged out. Waiting for processes to exit. Dec 12 17:30:48.858365 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:51682.service - OpenSSH per-connection server daemon (10.0.0.1:51682). Dec 12 17:30:48.859329 systemd-logind[1505]: Removed session 26. 
Dec 12 17:30:48.862113 kubelet[2687]: I1212 17:30:48.862072 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-cilium-run\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br" Dec 12 17:30:48.862113 kubelet[2687]: I1212 17:30:48.862111 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-xtables-lock\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br" Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862132 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-bpf-maps\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br" Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862146 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-cilium-cgroup\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br" Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862160 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-lib-modules\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br" Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862174 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-cni-path\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862189 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a77ebe3-caaf-4f00-8781-3eb6c8178450-cilium-config-path\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862222 kubelet[2687]: I1212 17:30:48.862206 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-host-proc-sys-net\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862220 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-hostproc\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862235 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-etc-cni-netd\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862249 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a77ebe3-caaf-4f00-8781-3eb6c8178450-clustermesh-secrets\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862264 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a77ebe3-caaf-4f00-8781-3eb6c8178450-hubble-tls\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862278 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a77ebe3-caaf-4f00-8781-3eb6c8178450-cilium-ipsec-secrets\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862345 kubelet[2687]: I1212 17:30:48.862295 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a77ebe3-caaf-4f00-8781-3eb6c8178450-host-proc-sys-kernel\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.862474 kubelet[2687]: I1212 17:30:48.862310 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ssk\" (UniqueName: \"kubernetes.io/projected/7a77ebe3-caaf-4f00-8781-3eb6c8178450-kube-api-access-76ssk\") pod \"cilium-ct5br\" (UID: \"7a77ebe3-caaf-4f00-8781-3eb6c8178450\") " pod="kube-system/cilium-ct5br"
Dec 12 17:30:48.916138 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:30:48.917418 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:48.922024 systemd-logind[1505]: New session 27 of user core.
Dec 12 17:30:48.928142 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 12 17:30:49.031597 containerd[1527]: time="2025-12-12T17:30:49.030933661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct5br,Uid:7a77ebe3-caaf-4f00-8781-3eb6c8178450,Namespace:kube-system,Attempt:0,}"
Dec 12 17:30:49.069653 containerd[1527]: time="2025-12-12T17:30:49.069605216Z" level=info msg="connecting to shim e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:30:49.099180 systemd[1]: Started cri-containerd-e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9.scope - libcontainer container e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9.
Dec 12 17:30:49.120832 containerd[1527]: time="2025-12-12T17:30:49.120789771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct5br,Uid:7a77ebe3-caaf-4f00-8781-3eb6c8178450,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\""
Dec 12 17:30:49.127220 containerd[1527]: time="2025-12-12T17:30:49.127173631Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 17:30:49.140915 containerd[1527]: time="2025-12-12T17:30:49.140302869Z" level=info msg="Container c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:30:49.146733 containerd[1527]: time="2025-12-12T17:30:49.146676488Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b\""
Dec 12 17:30:49.147667 containerd[1527]: time="2025-12-12T17:30:49.147640165Z" level=info msg="StartContainer for \"c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b\""
Dec 12 17:30:49.149449 containerd[1527]: time="2025-12-12T17:30:49.149410479Z" level=info msg="connecting to shim c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" protocol=ttrpc version=3
Dec 12 17:30:49.168186 systemd[1]: Started cri-containerd-c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b.scope - libcontainer container c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b.
Dec 12 17:30:49.197639 containerd[1527]: time="2025-12-12T17:30:49.197586244Z" level=info msg="StartContainer for \"c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b\" returns successfully"
Dec 12 17:30:49.206248 systemd[1]: cri-containerd-c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b.scope: Deactivated successfully.
Dec 12 17:30:49.208693 containerd[1527]: time="2025-12-12T17:30:49.208651089Z" level=info msg="received container exit event container_id:\"c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b\" id:\"c0a512a507298466c02e2cb8b7e0b62b59aeefb1d26698dbdf041a82d7e8007b\" pid:4579 exited_at:{seconds:1765560649 nanos:208264850}"
Dec 12 17:30:49.680317 containerd[1527]: time="2025-12-12T17:30:49.679047616Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 17:30:49.696022 containerd[1527]: time="2025-12-12T17:30:49.695961441Z" level=info msg="Container 7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:30:49.705760 containerd[1527]: time="2025-12-12T17:30:49.705683050Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f\""
Dec 12 17:30:49.706338 containerd[1527]: time="2025-12-12T17:30:49.706300008Z" level=info msg="StartContainer for \"7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f\""
Dec 12 17:30:49.707353 containerd[1527]: time="2025-12-12T17:30:49.707306445Z" level=info msg="connecting to shim 7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" protocol=ttrpc version=3
Dec 12 17:30:49.740223 systemd[1]: Started cri-containerd-7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f.scope - libcontainer container 7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f.
Dec 12 17:30:49.769139 containerd[1527]: time="2025-12-12T17:30:49.769026086Z" level=info msg="StartContainer for \"7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f\" returns successfully"
Dec 12 17:30:49.776229 systemd[1]: cri-containerd-7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f.scope: Deactivated successfully.
Dec 12 17:30:49.779532 containerd[1527]: time="2025-12-12T17:30:49.779235533Z" level=info msg="received container exit event container_id:\"7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f\" id:\"7f119f7d0f93407814617ce7f14ce1628dde9e8e7ffb4cd5ae1986dbea65309f\" pid:4624 exited_at:{seconds:1765560649 nanos:779018494}"
Dec 12 17:30:50.680698 containerd[1527]: time="2025-12-12T17:30:50.680655742Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:30:50.694470 containerd[1527]: time="2025-12-12T17:30:50.693393100Z" level=info msg="Container 710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:30:50.703335 containerd[1527]: time="2025-12-12T17:30:50.703287427Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871\""
Dec 12 17:30:50.703997 containerd[1527]: time="2025-12-12T17:30:50.703960825Z" level=info msg="StartContainer for \"710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871\""
Dec 12 17:30:50.705369 containerd[1527]: time="2025-12-12T17:30:50.705345980Z" level=info msg="connecting to shim 710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" protocol=ttrpc version=3
Dec 12 17:30:50.728147 systemd[1]: Started cri-containerd-710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871.scope - libcontainer container 710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871.
Dec 12 17:30:50.784873 systemd[1]: cri-containerd-710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871.scope: Deactivated successfully.
Dec 12 17:30:50.786258 containerd[1527]: time="2025-12-12T17:30:50.786219554Z" level=info msg="received container exit event container_id:\"710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871\" id:\"710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871\" pid:4669 exited_at:{seconds:1765560650 nanos:786042834}"
Dec 12 17:30:50.786654 containerd[1527]: time="2025-12-12T17:30:50.786582993Z" level=info msg="StartContainer for \"710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871\" returns successfully"
Dec 12 17:30:50.808374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-710e7cdd14ab6708e678a1b403734902a564fec0b26b8f7fb1cd85ea5c8f6871-rootfs.mount: Deactivated successfully.
Dec 12 17:30:51.685414 containerd[1527]: time="2025-12-12T17:30:51.685340423Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:30:51.701439 containerd[1527]: time="2025-12-12T17:30:51.701374929Z" level=info msg="Container 3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:30:51.711698 containerd[1527]: time="2025-12-12T17:30:51.711624374Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02\""
Dec 12 17:30:51.712877 containerd[1527]: time="2025-12-12T17:30:51.712759890Z" level=info msg="StartContainer for \"3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02\""
Dec 12 17:30:51.715308 containerd[1527]: time="2025-12-12T17:30:51.715118762Z" level=info msg="connecting to shim 3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" protocol=ttrpc version=3
Dec 12 17:30:51.749199 systemd[1]: Started cri-containerd-3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02.scope - libcontainer container 3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02.
Dec 12 17:30:51.778248 systemd[1]: cri-containerd-3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02.scope: Deactivated successfully.
Dec 12 17:30:51.780123 containerd[1527]: time="2025-12-12T17:30:51.780050824Z" level=info msg="received container exit event container_id:\"3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02\" id:\"3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02\" pid:4708 exited_at:{seconds:1765560651 nanos:779870184}"
Dec 12 17:30:51.795858 containerd[1527]: time="2025-12-12T17:30:51.795807451Z" level=info msg="StartContainer for \"3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02\" returns successfully"
Dec 12 17:30:51.810677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3565d4e3b9cb6d68219872087fad5eef5a86e4de63b3528fa1cb5f8bfb3eda02-rootfs.mount: Deactivated successfully.
Dec 12 17:30:52.389719 kubelet[2687]: E1212 17:30:52.389660 2687 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-pr8sn" podUID="f0982e72-a4b9-4869-b251-e9b4f6b47659"
Dec 12 17:30:52.754909 containerd[1527]: time="2025-12-12T17:30:52.754842888Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:30:52.781066 containerd[1527]: time="2025-12-12T17:30:52.781017798Z" level=info msg="Container cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:30:52.799696 containerd[1527]: time="2025-12-12T17:30:52.799651534Z" level=info msg="CreateContainer within sandbox \"e53def1fc9586e0a7cc1f882ab582405104515ae1897c7ebf61ec2a7f4fa65d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23\""
Dec 12 17:30:52.800714 containerd[1527]: time="2025-12-12T17:30:52.800521211Z" level=info msg="StartContainer for \"cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23\""
Dec 12 17:30:52.801947 containerd[1527]: time="2025-12-12T17:30:52.801912886Z" level=info msg="connecting to shim cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23" address="unix:///run/containerd/s/e4c11b84c4b431f41c45da6b601cec31ea00c0b8e3e71e26475115d16eaacf93" protocol=ttrpc version=3
Dec 12 17:30:52.824214 systemd[1]: Started cri-containerd-cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23.scope - libcontainer container cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23.
Dec 12 17:30:52.874337 containerd[1527]: time="2025-12-12T17:30:52.874288477Z" level=info msg="StartContainer for \"cf0cb7d5f499629f1b7d19db4a048e75e1c48f1c9e4e8e06cc63f5f9effbaa23\" returns successfully"
Dec 12 17:30:53.164037 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 12 17:30:53.707138 kubelet[2687]: I1212 17:30:53.706842 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ct5br" podStartSLOduration=5.706825086 podStartE2EDuration="5.706825086s" podCreationTimestamp="2025-12-12 17:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:30:53.706693607 +0000 UTC m=+90.417755603" watchObservedRunningTime="2025-12-12 17:30:53.706825086 +0000 UTC m=+90.417887042"
Dec 12 17:30:56.043018 systemd-networkd[1454]: lxc_health: Link UP
Dec 12 17:30:56.052309 systemd-networkd[1454]: lxc_health: Gained carrier
Dec 12 17:30:57.498233 systemd-networkd[1454]: lxc_health: Gained IPv6LL
Dec 12 17:31:01.704594 kubelet[2687]: E1212 17:31:01.704335 2687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59276->127.0.0.1:41891: write tcp 127.0.0.1:59276->127.0.0.1:41891: write: broken pipe
Dec 12 17:31:01.708637 sshd[4508]: Connection closed by 10.0.0.1 port 51682
Dec 12 17:31:01.707809 sshd-session[4505]: pam_unix(sshd:session): session closed for user core
Dec 12 17:31:01.711448 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:51682.service: Deactivated successfully.
Dec 12 17:31:01.713414 systemd[1]: session-27.scope: Deactivated successfully.
Dec 12 17:31:01.714197 systemd-logind[1505]: Session 27 logged out. Waiting for processes to exit.
Dec 12 17:31:01.715528 systemd-logind[1505]: Removed session 27.