Dec 16 12:23:39.790703 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:23:39.790725 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:23:39.790734 kernel: KASLR enabled
Dec 16 12:23:39.790740 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:23:39.790745 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:23:39.790751 kernel: random: crng init done
Dec 16 12:23:39.790758 kernel: secureboot: Secure boot disabled
Dec 16 12:23:39.790763 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:23:39.790769 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:23:39.790776 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:23:39.790782 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790787 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790793 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790799 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790806 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790813 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790820 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790826 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790832 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:23:39.790837 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:23:39.790843 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:23:39.790849 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:39.790855 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:23:39.790861 kernel: Zone ranges:
Dec 16 12:23:39.790867 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:39.790874 kernel: DMA32 empty
Dec 16 12:23:39.790880 kernel: Normal empty
Dec 16 12:23:39.790885 kernel: Device empty
Dec 16 12:23:39.790891 kernel: Movable zone start for each node
Dec 16 12:23:39.790897 kernel: Early memory node ranges
Dec 16 12:23:39.790903 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:23:39.790909 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:23:39.790915 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:23:39.790921 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:23:39.790927 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:23:39.790933 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:23:39.790939 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:23:39.790947 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:23:39.790952 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:23:39.790959 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:23:39.790967 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:23:39.790974 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:23:39.790981 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:23:39.790988 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:23:39.790995 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:23:39.791001 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:23:39.791007 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:23:39.791013 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:23:39.791020 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:23:39.791026 kernel: psci: Trusted OS migration not required
Dec 16 12:23:39.791032 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:23:39.791038 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:23:39.791044 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:23:39.791052 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:23:39.791058 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:23:39.791064 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:23:39.791071 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:23:39.791077 kernel: CPU features: detected: Spectre-v4
Dec 16 12:23:39.791083 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:23:39.791089 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:23:39.791096 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:23:39.791102 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:23:39.791108 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:23:39.791114 kernel: alternatives: applying boot alternatives
Dec 16 12:23:39.791121 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:23:39.791131 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:23:39.791148 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:23:39.791156 kernel: Fallback order for Node 0: 0
Dec 16 12:23:39.791162 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:23:39.791168 kernel: Policy zone: DMA
Dec 16 12:23:39.791175 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:23:39.791181 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:23:39.791187 kernel: software IO TLB: area num 4.
Dec 16 12:23:39.791193 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:23:39.791200 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:23:39.791206 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:23:39.791214 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:23:39.791221 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:23:39.791228 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:23:39.791234 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:23:39.791240 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:23:39.791247 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:23:39.791253 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:23:39.791259 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:23:39.791266 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:23:39.791272 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:23:39.791278 kernel: GICv3: 256 SPIs implemented
Dec 16 12:23:39.791286 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:23:39.791292 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:23:39.791299 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:23:39.791305 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:23:39.791311 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:23:39.791317 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:23:39.791324 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:23:39.791330 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:23:39.791336 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:23:39.791343 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:23:39.791349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:23:39.791355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:39.791363 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:23:39.791370 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:23:39.791376 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:23:39.791382 kernel: arm-pv: using stolen time PV
Dec 16 12:23:39.791389 kernel: Console: colour dummy device 80x25
Dec 16 12:23:39.791396 kernel: ACPI: Core revision 20240827
Dec 16 12:23:39.791402 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:23:39.791409 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:23:39.791416 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:23:39.791422 kernel: landlock: Up and running.
Dec 16 12:23:39.791430 kernel: SELinux: Initializing.
Dec 16 12:23:39.791436 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:23:39.791443 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:23:39.791449 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:23:39.791456 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:23:39.791463 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:23:39.791476 kernel: Remapping and enabling EFI services.
Dec 16 12:23:39.791482 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:23:39.791489 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:23:39.791502 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:23:39.791509 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:23:39.791516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:39.791524 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:23:39.791531 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:23:39.791538 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:23:39.791545 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:23:39.791552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:39.791560 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:23:39.791567 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:23:39.791574 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:23:39.791580 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:23:39.791587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:23:39.791594 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:23:39.791601 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:23:39.791607 kernel: SMP: Total of 4 processors activated.
Dec 16 12:23:39.791614 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:23:39.791622 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:23:39.791629 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:23:39.791636 kernel: CPU features: detected: Common not Private translations
Dec 16 12:23:39.791643 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:23:39.791650 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:23:39.791656 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:23:39.791663 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:23:39.791670 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:23:39.791676 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:23:39.791685 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:23:39.791692 kernel: alternatives: applying system-wide alternatives
Dec 16 12:23:39.791699 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:23:39.791706 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 16 12:23:39.791713 kernel: devtmpfs: initialized
Dec 16 12:23:39.791720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:23:39.791727 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:23:39.791734 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:23:39.791740 kernel: 0 pages in range for non-PLT usage
Dec 16 12:23:39.791749 kernel: 508400 pages in range for PLT usage
Dec 16 12:23:39.791755 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:23:39.791762 kernel: SMBIOS 3.0.0 present.
Dec 16 12:23:39.791769 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:23:39.791776 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:23:39.791783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:23:39.791789 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:23:39.791796 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:23:39.791803 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:23:39.791811 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:23:39.791818 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Dec 16 12:23:39.791825 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:23:39.791832 kernel: cpuidle: using governor menu
Dec 16 12:23:39.791839 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:23:39.791845 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:23:39.791852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:23:39.791859 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:23:39.791866 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:23:39.791874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:23:39.791881 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:23:39.791888 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:23:39.791894 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:23:39.791901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:23:39.791908 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:23:39.791915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:23:39.791921 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:23:39.791928 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:23:39.791936 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:23:39.791943 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:23:39.791950 kernel: ACPI: Interpreter enabled
Dec 16 12:23:39.791957 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:23:39.791964 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:23:39.791970 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:23:39.791977 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:23:39.791984 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:23:39.791991 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:23:39.791998 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:23:39.792006 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:23:39.792016 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:23:39.792168 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:23:39.792238 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:23:39.792299 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:23:39.792358 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:23:39.792416 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:23:39.792428 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:23:39.792435 kernel: PCI host bridge to bus 0000:00
Dec 16 12:23:39.792517 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:23:39.792590 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:23:39.792642 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:23:39.792695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:23:39.792773 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:23:39.792850 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:23:39.792910 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:23:39.792969 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:23:39.793027 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:23:39.793085 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:23:39.793164 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:23:39.793230 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:23:39.793285 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:23:39.793336 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:23:39.793388 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:23:39.793397 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:23:39.793404 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:23:39.793411 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:23:39.793418 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:23:39.793427 kernel: iommu: Default domain type: Translated
Dec 16 12:23:39.793434 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:23:39.793441 kernel: efivars: Registered efivars operations
Dec 16 12:23:39.793448 kernel: vgaarb: loaded
Dec 16 12:23:39.793455 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:23:39.793461 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:23:39.793476 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:23:39.793484 kernel: pnp: PnP ACPI init
Dec 16 12:23:39.793550 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:23:39.793562 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:23:39.793570 kernel: NET: Registered PF_INET protocol family
Dec 16 12:23:39.793576 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:23:39.793583 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:23:39.793590 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:23:39.793597 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:23:39.793604 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:23:39.793611 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:23:39.793620 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:23:39.793627 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:23:39.793634 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:23:39.793641 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:23:39.793647 kernel: kvm [1]: HYP mode not available
Dec 16 12:23:39.793654 kernel: Initialise system trusted keyrings
Dec 16 12:23:39.793661 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:23:39.793669 kernel: Key type asymmetric registered
Dec 16 12:23:39.793675 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:23:39.793684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:23:39.793691 kernel: io scheduler mq-deadline registered
Dec 16 12:23:39.793698 kernel: io scheduler kyber registered
Dec 16 12:23:39.793710 kernel: io scheduler bfq registered
Dec 16 12:23:39.793717 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:23:39.793724 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:23:39.793732 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:23:39.793792 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:23:39.793802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:23:39.793810 kernel: thunder_xcv, ver 1.0
Dec 16 12:23:39.793818 kernel: thunder_bgx, ver 1.0
Dec 16 12:23:39.793825 kernel: nicpf, ver 1.0
Dec 16 12:23:39.793832 kernel: nicvf, ver 1.0
Dec 16 12:23:39.793901 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:23:39.793957 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:23:39 UTC (1765887819)
Dec 16 12:23:39.793966 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:23:39.793974 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:23:39.793982 kernel: watchdog: NMI not fully supported
Dec 16 12:23:39.793989 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:23:39.793996 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:23:39.794003 kernel: Segment Routing with IPv6
Dec 16 12:23:39.794009 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:23:39.794016 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:23:39.794023 kernel: Key type dns_resolver registered
Dec 16 12:23:39.794030 kernel: registered taskstats version 1
Dec 16 12:23:39.794037 kernel: Loading compiled-in X.509 certificates
Dec 16 12:23:39.794044 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:23:39.794052 kernel: Demotion targets for Node 0: null
Dec 16 12:23:39.794059 kernel: Key type .fscrypt registered
Dec 16 12:23:39.794065 kernel: Key type fscrypt-provisioning registered
Dec 16 12:23:39.794072 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:23:39.794079 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:23:39.794086 kernel: ima: No architecture policies found
Dec 16 12:23:39.794093 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:23:39.794100 kernel: clk: Disabling unused clocks
Dec 16 12:23:39.794107 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:23:39.794116 kernel: Warning: unable to open an initial console.
Dec 16 12:23:39.794123 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:23:39.794129 kernel: Run /init as init process
Dec 16 12:23:39.794136 kernel: with arguments:
Dec 16 12:23:39.794155 kernel: /init
Dec 16 12:23:39.794162 kernel: with environment:
Dec 16 12:23:39.794168 kernel: HOME=/
Dec 16 12:23:39.794175 kernel: TERM=linux
Dec 16 12:23:39.794183 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:23:39.794197 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:23:39.794205 systemd[1]: Detected virtualization kvm.
Dec 16 12:23:39.794212 systemd[1]: Detected architecture arm64.
Dec 16 12:23:39.794220 systemd[1]: Running in initrd.
Dec 16 12:23:39.794227 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:23:39.794235 systemd[1]: Hostname set to .
Dec 16 12:23:39.794242 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:23:39.794251 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:23:39.794259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:23:39.794267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:23:39.794275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:23:39.794282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:23:39.794290 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:23:39.794298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:23:39.794308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:23:39.794316 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:23:39.794323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:23:39.794331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:23:39.794338 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:23:39.794346 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:23:39.794353 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:23:39.794361 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:23:39.794369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:23:39.794383 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:23:39.794394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:23:39.794402 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:23:39.794409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:23:39.794416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:23:39.794424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:23:39.794432 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:23:39.794439 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:23:39.794448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:23:39.794456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:23:39.794464 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:23:39.794479 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:23:39.794487 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:23:39.794495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:23:39.794502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:39.794510 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:23:39.794520 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:23:39.794528 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:23:39.794555 systemd-journald[246]: Collecting audit messages is disabled.
Dec 16 12:23:39.794576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:23:39.794586 systemd-journald[246]: Journal started
Dec 16 12:23:39.794603 systemd-journald[246]: Runtime Journal (/run/log/journal/8aa4f1caded341a28efb447011255b55) is 6M, max 48.5M, 42.4M free.
Dec 16 12:23:39.786867 systemd-modules-load[248]: Inserted module 'overlay'
Dec 16 12:23:39.797816 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:23:39.800160 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:23:39.801963 systemd-modules-load[248]: Inserted module 'br_netfilter'
Dec 16 12:23:39.802883 kernel: Bridge firewalling registered
Dec 16 12:23:39.804830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:39.806873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:23:39.810828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:23:39.812614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:23:39.820987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:23:39.824649 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:23:39.827410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:23:39.833583 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:23:39.834387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:23:39.838366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:23:39.843279 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:23:39.845444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:23:39.864290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:23:39.866800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:23:39.894043 systemd-resolved[287]: Positive Trust Anchors:
Dec 16 12:23:39.894064 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:23:39.894096 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:23:39.899101 systemd-resolved[287]: Defaulting to hostname 'linux'.
Dec 16 12:23:39.900321 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:23:39.904432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:23:39.913171 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:23:39.993175 kernel: SCSI subsystem initialized
Dec 16 12:23:39.998160 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:23:40.006176 kernel: iscsi: registered transport (tcp)
Dec 16 12:23:40.019181 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:23:40.019246 kernel: QLogic iSCSI HBA Driver
Dec 16 12:23:40.037545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:23:40.056337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:23:40.059039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:23:40.108221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:23:40.112288 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:23:40.176216 kernel: raid6: neonx8 gen() 15659 MB/s
Dec 16 12:23:40.193173 kernel: raid6: neonx4 gen() 14630 MB/s
Dec 16 12:23:40.210203 kernel: raid6: neonx2 gen() 12728 MB/s
Dec 16 12:23:40.227191 kernel: raid6: neonx1 gen() 9903 MB/s
Dec 16 12:23:40.244179 kernel: raid6: int64x8 gen() 6641 MB/s
Dec 16 12:23:40.261190 kernel: raid6: int64x4 gen() 7253 MB/s
Dec 16 12:23:40.278171 kernel: raid6: int64x2 gen() 6071 MB/s
Dec 16 12:23:40.295214 kernel: raid6: int64x1 gen() 5018 MB/s
Dec 16 12:23:40.295264 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s
Dec 16 12:23:40.313206 kernel: raid6: .... xor() 11953 MB/s, rmw enabled
Dec 16 12:23:40.313276 kernel: raid6: using neon recovery algorithm
Dec 16 12:23:40.319172 kernel: xor: measuring software checksum speed
Dec 16 12:23:40.319225 kernel: 8regs : 20702 MB/sec
Dec 16 12:23:40.320423 kernel: 32regs : 19441 MB/sec
Dec 16 12:23:40.320440 kernel: arm64_neon : 27965 MB/sec
Dec 16 12:23:40.320449 kernel: xor: using function: arm64_neon (27965 MB/sec)
Dec 16 12:23:40.374185 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:23:40.381264 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:23:40.383837 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:23:40.412798 systemd-udevd[502]: Using default interface naming scheme 'v255'.
Dec 16 12:23:40.417057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:23:40.419045 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:23:40.446674 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Dec 16 12:23:40.473976 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:23:40.476439 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:23:40.540228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:23:40.543135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:23:40.591013 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:23:40.591231 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:23:40.597330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:23:40.597382 kernel: GPT:9289727 != 19775487
Dec 16 12:23:40.597392 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:23:40.598411 kernel: GPT:9289727 != 19775487
Dec 16 12:23:40.599420 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:23:40.599456 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:23:40.630053 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:23:40.638409 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:23:40.647105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:23:40.662782 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:23:40.664008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:23:40.667327 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:23:40.670642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:23:40.671849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:23:40.673947 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:23:40.676786 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:23:40.678685 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:23:40.679773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:23:40.679851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:40.682865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:40.696879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:40.702658 disk-uuid[590]: Primary Header is updated.
Dec 16 12:23:40.702658 disk-uuid[590]: Secondary Entries is updated.
Dec 16 12:23:40.702658 disk-uuid[590]: Secondary Header is updated.
Dec 16 12:23:40.707178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:23:40.708597 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:23:40.715184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:23:40.716809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:41.718168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:23:41.718880 disk-uuid[596]: The operation has completed successfully.
Dec 16 12:23:41.760191 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:23:41.760293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:23:41.782157 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:23:41.808039 sh[614]: Success
Dec 16 12:23:41.823068 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:23:41.823172 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:23:41.823189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:23:41.839541 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:23:41.881324 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:23:41.883512 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:23:41.915336 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:23:41.921764 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626)
Dec 16 12:23:41.921804 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:23:41.921814 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:41.929167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:23:41.929227 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:23:41.930175 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:23:41.931585 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:23:41.932870 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:23:41.933726 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:23:41.935711 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:23:41.965219 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Dec 16 12:23:41.965275 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:23:41.966284 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:41.969182 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:41.969260 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:41.974206 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:23:41.975903 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:23:41.978229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:23:42.070702 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:23:42.074574 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:23:42.108565 ignition[708]: Ignition 2.22.0
Dec 16 12:23:42.109517 ignition[708]: Stage: fetch-offline
Dec 16 12:23:42.110245 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:42.110261 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:42.110371 ignition[708]: parsed url from cmdline: ""
Dec 16 12:23:42.110375 ignition[708]: no config URL provided
Dec 16 12:23:42.110380 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:23:42.110404 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:23:42.110430 ignition[708]: op(1): [started] loading QEMU firmware config module
Dec 16 12:23:42.110434 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:23:42.121364 ignition[708]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:23:42.122068 systemd-networkd[808]: lo: Link UP
Dec 16 12:23:42.122071 systemd-networkd[808]: lo: Gained carrier
Dec 16 12:23:42.122967 systemd-networkd[808]: Enumeration completed
Dec 16 12:23:42.123120 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:23:42.124114 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:23:42.124118 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:23:42.125398 systemd-networkd[808]: eth0: Link UP
Dec 16 12:23:42.125541 systemd-networkd[808]: eth0: Gained carrier
Dec 16 12:23:42.125555 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:23:42.126429 systemd[1]: Reached target network.target - Network.
Dec 16 12:23:42.158263 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:23:42.187793 ignition[708]: parsing config with SHA512: a3f9b3113f2db320464f4f444148c554728f16755984ff2f23bfb6b60855fbc779c5ed8f51c5016eb946f2127046f620a50802c31d5c38a028532a98e35822f5
Dec 16 12:23:42.195338 unknown[708]: fetched base config from "system"
Dec 16 12:23:42.195350 unknown[708]: fetched user config from "qemu"
Dec 16 12:23:42.195795 ignition[708]: fetch-offline: fetch-offline passed
Dec 16 12:23:42.195860 ignition[708]: Ignition finished successfully
Dec 16 12:23:42.200101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:23:42.201673 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:23:42.202598 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:23:42.235264 ignition[816]: Ignition 2.22.0
Dec 16 12:23:42.235277 ignition[816]: Stage: kargs
Dec 16 12:23:42.235448 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:42.235457 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:42.239056 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:23:42.236330 ignition[816]: kargs: kargs passed
Dec 16 12:23:42.236396 ignition[816]: Ignition finished successfully
Dec 16 12:23:42.242631 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:23:42.284457 ignition[824]: Ignition 2.22.0
Dec 16 12:23:42.284478 ignition[824]: Stage: disks
Dec 16 12:23:42.284627 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:42.287394 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:23:42.284635 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:42.289071 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:23:42.285409 ignition[824]: disks: disks passed
Dec 16 12:23:42.290813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:23:42.285468 ignition[824]: Ignition finished successfully
Dec 16 12:23:42.294303 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:23:42.296330 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:23:42.297760 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:23:42.301189 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:23:42.337964 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:23:42.343301 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:23:42.346321 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:23:42.420595 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:23:42.421065 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:23:42.422475 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:23:42.425248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:23:42.427183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:23:42.428129 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:23:42.428224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:23:42.428257 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:23:42.441077 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:23:42.445364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Dec 16 12:23:42.445406 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:23:42.444264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:23:42.449483 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:42.452166 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:42.452203 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:42.454521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:23:42.501715 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:23:42.505770 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:23:42.509676 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:23:42.513286 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:23:42.599253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:23:42.601518 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:23:42.605887 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:23:42.621175 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:23:42.636355 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:23:42.653251 ignition[955]: INFO : Ignition 2.22.0
Dec 16 12:23:42.653251 ignition[955]: INFO : Stage: mount
Dec 16 12:23:42.654963 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:42.654963 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:42.654963 ignition[955]: INFO : mount: mount passed
Dec 16 12:23:42.654963 ignition[955]: INFO : Ignition finished successfully
Dec 16 12:23:42.656837 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:23:42.660666 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:23:42.920234 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:23:42.921816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:23:42.950158 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Dec 16 12:23:42.950212 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:23:42.950223 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:23:42.954400 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:23:42.954436 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:23:42.956001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:23:42.989966 ignition[984]: INFO : Ignition 2.22.0
Dec 16 12:23:42.989966 ignition[984]: INFO : Stage: files
Dec 16 12:23:42.991698 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:42.991698 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:42.991698 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:23:42.994935 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:23:42.994935 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:23:42.997630 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:23:42.997630 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:23:42.997630 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:23:42.997394 unknown[984]: wrote ssh authorized keys file for user: core
Dec 16 12:23:43.003133 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:23:43.005158 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:23:43.036663 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:23:43.118369 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:23:43.118369 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:23:43.118369 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 16 12:23:43.249619 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 12:23:43.308283 systemd-networkd[808]: eth0: Gained IPv6LL
Dec 16 12:23:43.323681 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 12:23:43.323681 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:23:43.327363 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:23:43.348075 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:23:43.350237 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:23:43.350237 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:23:43.359347 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:23:43.359347 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:23:43.364276 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 16 12:23:43.627743 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 12:23:43.850570 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:23:43.850570 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 12:23:43.854435 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:23:43.858601 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:23:43.858601 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 12:23:43.858601 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 12:23:43.863714 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:23:43.863714 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:23:43.863714 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 12:23:43.863714 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:23:43.883505 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:23:43.887845 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:23:43.889404 ignition[984]: INFO : files: files passed
Dec 16 12:23:43.889404 ignition[984]: INFO : Ignition finished successfully
Dec 16 12:23:43.891529 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:23:43.898374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:23:43.901169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:23:43.918001 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:23:43.918111 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:23:43.922132 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:23:43.923743 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:43.923743 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:43.926912 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:23:43.926087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:23:43.929534 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:23:43.932561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:23:43.986316 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:23:43.986464 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:23:43.988864 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:23:43.990728 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:23:43.992758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:23:43.993713 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:23:44.033346 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:23:44.036608 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:23:44.064070 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:23:44.065951 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:23:44.070757 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:23:44.072587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:23:44.072728 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:23:44.076589 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:23:44.081244 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:23:44.084338 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:23:44.086868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:23:44.089977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:23:44.092614 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:23:44.094615 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:23:44.096693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:23:44.098643 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:23:44.100570 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:23:44.102318 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:23:44.104018 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:23:44.104169 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:23:44.106626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:23:44.108603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:23:44.110779 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:23:44.114247 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:23:44.115540 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:23:44.115674 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:23:44.118913 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:23:44.119050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:23:44.121082 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:23:44.122626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:23:44.122780 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:23:44.124660 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:23:44.126059 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:23:44.127930 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:23:44.128029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:23:44.130181 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:23:44.130266 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:23:44.131904 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:23:44.132038 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:23:44.133717 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:23:44.133823 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:23:44.136288 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:23:44.137853 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:23:44.138022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:23:44.140854 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:23:44.142937 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:23:44.143082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:23:44.144881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:23:44.144998 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:23:44.150345 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:23:44.153298 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:23:44.166532 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:23:44.174869 ignition[1039]: INFO : Ignition 2.22.0
Dec 16 12:23:44.174869 ignition[1039]: INFO : Stage: umount
Dec 16 12:23:44.177849 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:23:44.177849 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:23:44.177849 ignition[1039]: INFO : umount: umount passed
Dec 16 12:23:44.177849 ignition[1039]: INFO : Ignition finished successfully
Dec 16 12:23:44.179433 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:23:44.179580 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:23:44.180708 systemd[1]: Stopped target network.target - Network.
Dec 16 12:23:44.184329 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:23:44.184427 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:23:44.185493 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:23:44.185550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:23:44.187478 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:23:44.187541 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:23:44.189324 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:23:44.189371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:23:44.191219 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:23:44.192945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:23:44.197730 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:23:44.197859 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:23:44.201590 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:23:44.201849 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:23:44.201893 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:23:44.205597 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:23:44.209882 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:23:44.209995 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:23:44.212900 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:23:44.213093 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:23:44.214378 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:23:44.214420 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:23:44.217407 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:23:44.219542 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:23:44.219620 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:23:44.221965 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:23:44.222015 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:23:44.225156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:23:44.225210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:23:44.227401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:23:44.232578 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:23:44.240912 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:23:44.241125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:23:44.245650 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:23:44.245692 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:23:44.247269 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:23:44.247303 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:23:44.249458 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:23:44.249524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:23:44.254033 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:23:44.254100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:23:44.258124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:23:44.258282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:23:44.261325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:23:44.262541 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:23:44.262630 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:23:44.265443 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:23:44.265503 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:23:44.270457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:23:44.270520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:23:44.273444 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:23:44.273575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:23:44.275382 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:23:44.275510 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:23:44.278663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:23:44.278750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:23:44.281565 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:23:44.283598 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:23:44.283685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:23:44.286605 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:23:44.307953 systemd[1]: Switching root.
Dec 16 12:23:44.343093 systemd-journald[246]: Journal stopped
Dec 16 12:23:45.300258 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:23:45.300318 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:23:45.300334 kernel: SELinux: policy capability open_perms=1
Dec 16 12:23:45.300346 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:23:45.300367 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:23:45.300377 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:23:45.300386 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:23:45.300397 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:23:45.300407 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:23:45.300416 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:23:45.300428 kernel: audit: type=1403 audit(1765887824.534:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:23:45.300439 systemd[1]: Successfully loaded SELinux policy in 59.554ms.
Dec 16 12:23:45.300464 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.800ms.
Dec 16 12:23:45.300480 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:23:45.300491 systemd[1]: Detected virtualization kvm.
Dec 16 12:23:45.300500 systemd[1]: Detected architecture arm64.
Dec 16 12:23:45.300510 systemd[1]: Detected first boot.
Dec 16 12:23:45.300521 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:23:45.300532 zram_generator::config[1085]: No configuration found.
Dec 16 12:23:45.300543 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:23:45.300552 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:23:45.300565 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:23:45.300576 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:23:45.300586 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:23:45.300597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:23:45.300607 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:23:45.300617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:23:45.300627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:23:45.300637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:23:45.300647 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:23:45.300659 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:23:45.300670 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:23:45.300680 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:23:45.300690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:23:45.300700 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:23:45.300717 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:23:45.300727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:23:45.300737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:23:45.300749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:23:45.300759 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 16 12:23:45.300769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:23:45.300789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:23:45.300799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:23:45.300810 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:23:45.300820 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:23:45.300830 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:23:45.300844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:23:45.300854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:23:45.300864 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:23:45.300874 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:23:45.300889 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:23:45.300900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:23:45.300910 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:23:45.300920 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:23:45.300931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:23:45.300941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:23:45.300954 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:23:45.300969 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:23:45.300980 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:23:45.300990 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:23:45.301001 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:23:45.301011 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:23:45.301021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:23:45.301032 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:23:45.301044 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:23:45.301055 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:23:45.301067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:23:45.301078 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:23:45.301088 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:23:45.301099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:23:45.301110 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:23:45.301121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:23:45.301132 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:23:45.301246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:23:45.301259 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:23:45.301272 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:23:45.301288 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:23:45.301298 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:23:45.301309 kernel: fuse: init (API version 7.41)
Dec 16 12:23:45.301318 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:23:45.301330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:23:45.301344 kernel: loop: module loaded
Dec 16 12:23:45.301361 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:23:45.301372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:23:45.301389 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:23:45.301400 kernel: ACPI: bus type drm_connector registered
Dec 16 12:23:45.301410 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:23:45.301420 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:23:45.301482 systemd-journald[1160]: Collecting audit messages is disabled.
Dec 16 12:23:45.301512 systemd-journald[1160]: Journal started
Dec 16 12:23:45.301535 systemd-journald[1160]: Runtime Journal (/run/log/journal/8aa4f1caded341a28efb447011255b55) is 6M, max 48.5M, 42.4M free.
Dec 16 12:23:45.043584 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:23:45.064630 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 12:23:45.065093 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:23:45.304408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:23:45.306186 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:23:45.306221 systemd[1]: Stopped verity-setup.service.
Dec 16 12:23:45.312311 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:23:45.312997 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:23:45.314420 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:23:45.315920 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:23:45.317218 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:23:45.318522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:23:45.320004 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:23:45.321516 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:23:45.324183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:23:45.325951 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:23:45.326137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:23:45.327780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:23:45.327993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:23:45.329651 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:23:45.329836 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:23:45.331348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:23:45.331530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:23:45.333409 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:23:45.333616 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:23:45.335091 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:23:45.335327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:23:45.336920 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:23:45.338562 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:23:45.341626 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:23:45.343347 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:23:45.356596 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:23:45.359365 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:23:45.361813 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:23:45.363165 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:23:45.363206 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:23:45.365313 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:23:45.374072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:23:45.375373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:23:45.376796 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:23:45.379206 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:23:45.382580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:23:45.386356 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:23:45.387843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:23:45.389059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:23:45.390043 systemd-journald[1160]: Time spent on flushing to /var/log/journal/8aa4f1caded341a28efb447011255b55 is 11.663ms for 886 entries.
Dec 16 12:23:45.390043 systemd-journald[1160]: System Journal (/var/log/journal/8aa4f1caded341a28efb447011255b55) is 8M, max 195.6M, 187.6M free.
Dec 16 12:23:45.409592 systemd-journald[1160]: Received client request to flush runtime journal.
Dec 16 12:23:45.393290 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:23:45.403494 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:23:45.406269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:23:45.407892 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:23:45.412428 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:23:45.415131 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:23:45.428911 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:23:45.431159 kernel: loop0: detected capacity change from 0 to 119840
Dec 16 12:23:45.432721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:23:45.435548 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:23:45.439333 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:23:45.450185 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:23:45.456826 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:23:45.460671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:23:45.470184 kernel: loop1: detected capacity change from 0 to 100632
Dec 16 12:23:45.481133 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:23:45.493740 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Dec 16 12:23:45.493759 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Dec 16 12:23:45.499214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:23:45.508196 kernel: loop2: detected capacity change from 0 to 200800
Dec 16 12:23:45.560251 kernel: loop3: detected capacity change from 0 to 119840
Dec 16 12:23:45.571227 kernel: loop4: detected capacity change from 0 to 100632
Dec 16 12:23:45.580186 kernel: loop5: detected capacity change from 0 to 200800
Dec 16 12:23:45.589048 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 12:23:45.589624 (sd-merge)[1223]: Merged extensions into '/usr'.
Dec 16 12:23:45.593515 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:23:45.593572 systemd[1]: Reloading...
Dec 16 12:23:45.663368 zram_generator::config[1249]: No configuration found.
Dec 16 12:23:45.758825 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:23:45.831357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:23:45.831660 systemd[1]: Reloading finished in 237 ms.
Dec 16 12:23:45.864231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:23:45.865689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:23:45.880639 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:23:45.882667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:23:45.893535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:23:45.897074 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:23:45.897096 systemd[1]: Reloading...
Dec 16 12:23:45.901471 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:23:45.901826 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:23:45.902127 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:23:45.902458 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:23:45.903211 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:23:45.903512 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Dec 16 12:23:45.903652 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Dec 16 12:23:45.906784 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:23:45.906914 systemd-tmpfiles[1285]: Skipping /boot
Dec 16 12:23:45.913063 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:23:45.913234 systemd-tmpfiles[1285]: Skipping /boot
Dec 16 12:23:45.942167 zram_generator::config[1312]: No configuration found.
Dec 16 12:23:46.110099 systemd[1]: Reloading finished in 212 ms.
Dec 16 12:23:46.140269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:23:46.146542 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:23:46.150588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:23:46.161267 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:23:46.166350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:23:46.169341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:23:46.172326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:23:46.178670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:23:46.183500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:23:46.193107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:23:46.197627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:23:46.200337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:23:46.200524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:23:46.201979 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:23:46.204060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:23:46.204252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:23:46.206303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:23:46.206508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:23:46.209012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:23:46.209217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:23:46.217980 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Dec 16 12:23:46.221134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:23:46.223477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:23:46.226729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:23:46.231802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:23:46.233568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:23:46.233765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:23:46.235463 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:23:46.240494 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:23:46.244257 augenrules[1383]: No rules
Dec 16 12:23:46.247365 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:23:46.249618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:23:46.251649 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:23:46.253274 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:23:46.258194 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:23:46.263082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:23:46.263562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:23:46.265850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:23:46.268187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:23:46.270896 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:23:46.271110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:23:46.273817 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:23:46.310050 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:23:46.316022 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:23:46.321902 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:23:46.323275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:23:46.325280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:23:46.339978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:23:46.345549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:23:46.351304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:23:46.352591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:23:46.352647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:23:46.355064 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:23:46.363031 augenrules[1425]: /sbin/augenrules: No change
Dec 16 12:23:46.372832 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 12:23:46.376025 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:23:46.376734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:23:46.376934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:23:46.378521 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:23:46.378703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:23:46.383340 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:23:46.384272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:23:46.385796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:23:46.387268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:23:46.393514 augenrules[1451]: No rules
Dec 16 12:23:46.395723 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:23:46.397248 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:23:46.406087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:23:46.406186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:23:46.411474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:23:46.416585 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:23:46.420647 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:23:46.448204 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:23:46.493451 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 12:23:46.494580 systemd-networkd[1440]: lo: Link UP
Dec 16 12:23:46.494593 systemd-networkd[1440]: lo: Gained carrier
Dec 16 12:23:46.495430 systemd-networkd[1440]: Enumeration completed
Dec 16 12:23:46.495607 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:23:46.495870 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:23:46.495878 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:23:46.496466 systemd-networkd[1440]: eth0: Link UP
Dec 16 12:23:46.496589 systemd-networkd[1440]: eth0: Gained carrier
Dec 16 12:23:46.496607 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:23:46.496889 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:23:46.501179 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:23:46.504456 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:23:46.517216 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:23:46.517805 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Dec 16 12:23:46.952412 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 12:23:46.952464 systemd-timesyncd[1453]: Initial clock synchronization to Tue 2025-12-16 12:23:46.952292 UTC.
Dec 16 12:23:46.956223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:23:46.964204 systemd-resolved[1350]: Positive Trust Anchors: Dec 16 12:23:46.965860 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:23:46.965937 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:23:46.965973 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:23:46.977398 systemd-resolved[1350]: Defaulting to hostname 'linux'. Dec 16 12:23:46.979233 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:23:46.980482 systemd[1]: Reached target network.target - Network. Dec 16 12:23:46.981356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:23:47.033384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:23:47.034884 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:23:47.037269 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:23:47.038535 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:23:47.040161 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:23:47.041372 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Dec 16 12:23:47.042667 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:23:47.043908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:23:47.043949 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:23:47.044902 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:23:47.046720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:23:47.049447 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:23:47.052898 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:23:47.054568 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:23:47.055942 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:23:47.060082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:23:47.061564 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:23:47.063430 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:23:47.064681 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:23:47.065660 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:23:47.066648 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:23:47.066681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:23:47.067873 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:23:47.070162 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:23:47.072354 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 16 12:23:47.074610 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:23:47.078791 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:23:47.079881 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:23:47.081834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:23:47.085692 jq[1505]: false Dec 16 12:23:47.085172 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:23:47.088607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:23:47.090868 extend-filesystems[1506]: Found /dev/vda6 Dec 16 12:23:47.092233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:23:47.093547 extend-filesystems[1506]: Found /dev/vda9 Dec 16 12:23:47.096642 extend-filesystems[1506]: Checking size of /dev/vda9 Dec 16 12:23:47.097567 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:23:47.099724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:23:47.101414 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:23:47.102314 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:23:47.105228 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:23:47.110131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:23:47.111824 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 16 12:23:47.114396 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:23:47.114860 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:23:47.115270 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:23:47.119835 jq[1525]: true Dec 16 12:23:47.123603 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:23:47.123872 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:23:47.127193 extend-filesystems[1506]: Resized partition /dev/vda9 Dec 16 12:23:47.133280 extend-filesystems[1538]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:23:47.145313 update_engine[1524]: I20251216 12:23:47.142148 1524 main.cc:92] Flatcar Update Engine starting Dec 16 12:23:47.145604 jq[1536]: true Dec 16 12:23:47.152219 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:23:47.163076 tar[1532]: linux-arm64/LICENSE Dec 16 12:23:47.163076 tar[1532]: linux-arm64/helm Dec 16 12:23:47.172513 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:23:47.183748 dbus-daemon[1503]: [system] SELinux support is enabled Dec 16 12:23:47.183980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:23:47.191321 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:23:47.191390 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 16 12:23:47.192154 update_engine[1524]: I20251216 12:23:47.191885 1524 update_check_scheduler.cc:74] Next update check in 10m51s Dec 16 12:23:47.192947 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:23:47.192972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:23:47.194930 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:23:47.200053 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:23:47.201417 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:23:47.216977 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:23:47.217397 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:23:47.217397 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:23:47.217397 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:23:47.225083 extend-filesystems[1506]: Resized filesystem in /dev/vda9 Dec 16 12:23:47.218195 systemd-logind[1519]: New seat seat0. Dec 16 12:23:47.218395 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:23:47.218632 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:23:47.223489 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:23:47.238234 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:23:47.236768 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:23:47.240823 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 16 12:23:47.258854 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:23:47.345696 containerd[1545]: time="2025-12-16T12:23:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:23:47.346311 containerd[1545]: time="2025-12-16T12:23:47.346277342Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:23:47.361229 containerd[1545]: time="2025-12-16T12:23:47.361171582Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.56µs" Dec 16 12:23:47.361229 containerd[1545]: time="2025-12-16T12:23:47.361213662Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:23:47.361229 containerd[1545]: time="2025-12-16T12:23:47.361231662Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361417382Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361440262Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361463342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361512022Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361523262Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361749462Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361763182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361773342Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361780982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.361847382Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364041 containerd[1545]: time="2025-12-16T12:23:47.362059262Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364270 containerd[1545]: time="2025-12-16T12:23:47.362089902Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:23:47.364270 containerd[1545]: time="2025-12-16T12:23:47.362100182Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:23:47.364270 containerd[1545]: time="2025-12-16T12:23:47.362130062Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:23:47.364322 containerd[1545]: time="2025-12-16T12:23:47.364254702Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:23:47.364399 containerd[1545]: time="2025-12-16T12:23:47.364374062Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:23:47.370160 containerd[1545]: time="2025-12-16T12:23:47.370115382Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:23:47.370235 containerd[1545]: time="2025-12-16T12:23:47.370183502Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:23:47.370235 containerd[1545]: time="2025-12-16T12:23:47.370199942Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:23:47.370235 containerd[1545]: time="2025-12-16T12:23:47.370212342Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:23:47.370235 containerd[1545]: time="2025-12-16T12:23:47.370224022Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:23:47.370235 containerd[1545]: time="2025-12-16T12:23:47.370234182Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370247422Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370259942Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370271702Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370282542Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370291622Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:23:47.370353 containerd[1545]: time="2025-12-16T12:23:47.370303022Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:23:47.370446 containerd[1545]: time="2025-12-16T12:23:47.370434582Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:23:47.370464 containerd[1545]: time="2025-12-16T12:23:47.370457942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:23:47.370499 containerd[1545]: time="2025-12-16T12:23:47.370477382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:23:47.370522 containerd[1545]: time="2025-12-16T12:23:47.370498902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:23:47.370522 containerd[1545]: time="2025-12-16T12:23:47.370510502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:23:47.370559 containerd[1545]: time="2025-12-16T12:23:47.370521262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:23:47.370559 containerd[1545]: time="2025-12-16T12:23:47.370534102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:23:47.370559 containerd[1545]: time="2025-12-16T12:23:47.370544782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 
12:23:47.370559 containerd[1545]: time="2025-12-16T12:23:47.370557222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:23:47.370635 containerd[1545]: time="2025-12-16T12:23:47.370575142Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:23:47.370635 containerd[1545]: time="2025-12-16T12:23:47.370601062Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:23:47.370785 containerd[1545]: time="2025-12-16T12:23:47.370767462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:23:47.370811 containerd[1545]: time="2025-12-16T12:23:47.370785742Z" level=info msg="Start snapshots syncer" Dec 16 12:23:47.370829 containerd[1545]: time="2025-12-16T12:23:47.370812862Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:23:47.371100 containerd[1545]: time="2025-12-16T12:23:47.371061542Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:23:47.371191 containerd[1545]: time="2025-12-16T12:23:47.371112982Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:23:47.371191 containerd[1545]: time="2025-12-16T12:23:47.371160222Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:23:47.371279 containerd[1545]: time="2025-12-16T12:23:47.371259302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:23:47.371312 containerd[1545]: time="2025-12-16T12:23:47.371289062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:23:47.371312 containerd[1545]: time="2025-12-16T12:23:47.371309422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:23:47.371372 containerd[1545]: time="2025-12-16T12:23:47.371320022Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:23:47.371372 containerd[1545]: time="2025-12-16T12:23:47.371341342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:23:47.371372 containerd[1545]: time="2025-12-16T12:23:47.371352022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:23:47.371372 containerd[1545]: time="2025-12-16T12:23:47.371362942Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:23:47.371435 containerd[1545]: time="2025-12-16T12:23:47.371385342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:23:47.371435 containerd[1545]: time="2025-12-16T12:23:47.371396542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:23:47.371435 containerd[1545]: time="2025-12-16T12:23:47.371407502Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:23:47.371486 containerd[1545]: time="2025-12-16T12:23:47.371441022Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:23:47.371486 containerd[1545]: time="2025-12-16T12:23:47.371455662Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:23:47.371486 containerd[1545]: time="2025-12-16T12:23:47.371463902Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:23:47.371486 containerd[1545]: time="2025-12-16T12:23:47.371473382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:23:47.371486 containerd[1545]: time="2025-12-16T12:23:47.371481342Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:23:47.371571 containerd[1545]: time="2025-12-16T12:23:47.371491662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:23:47.371571 containerd[1545]: time="2025-12-16T12:23:47.371502582Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:23:47.371606 containerd[1545]: time="2025-12-16T12:23:47.371579582Z" level=info msg="runtime interface created" Dec 16 12:23:47.371606 containerd[1545]: time="2025-12-16T12:23:47.371585142Z" level=info msg="created NRI interface" Dec 16 12:23:47.371606 containerd[1545]: time="2025-12-16T12:23:47.371593822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:23:47.371606 containerd[1545]: time="2025-12-16T12:23:47.371605222Z" level=info msg="Connect containerd service" Dec 16 12:23:47.371671 containerd[1545]: time="2025-12-16T12:23:47.371627222Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:23:47.375477 
containerd[1545]: time="2025-12-16T12:23:47.375439742Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:23:47.450701 containerd[1545]: time="2025-12-16T12:23:47.450628342Z" level=info msg="Start subscribing containerd event" Dec 16 12:23:47.450701 containerd[1545]: time="2025-12-16T12:23:47.450707382Z" level=info msg="Start recovering state" Dec 16 12:23:47.450813 containerd[1545]: time="2025-12-16T12:23:47.450803782Z" level=info msg="Start event monitor" Dec 16 12:23:47.450832 containerd[1545]: time="2025-12-16T12:23:47.450818782Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:23:47.450832 containerd[1545]: time="2025-12-16T12:23:47.450826942Z" level=info msg="Start streaming server" Dec 16 12:23:47.450866 containerd[1545]: time="2025-12-16T12:23:47.450835982Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:23:47.450866 containerd[1545]: time="2025-12-16T12:23:47.450843622Z" level=info msg="runtime interface starting up..." Dec 16 12:23:47.450866 containerd[1545]: time="2025-12-16T12:23:47.450848702Z" level=info msg="starting plugins..." Dec 16 12:23:47.450866 containerd[1545]: time="2025-12-16T12:23:47.450861302Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:23:47.451286 containerd[1545]: time="2025-12-16T12:23:47.451262222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:23:47.451320 containerd[1545]: time="2025-12-16T12:23:47.451310102Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:23:47.452586 containerd[1545]: time="2025-12-16T12:23:47.451371702Z" level=info msg="containerd successfully booted in 0.107428s" Dec 16 12:23:47.451480 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 12:23:47.492189 tar[1532]: linux-arm64/README.md Dec 16 12:23:47.511555 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:23:47.895125 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:23:47.916868 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:23:47.920909 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:23:47.949656 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:23:47.949922 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:23:47.953780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:23:47.983154 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:23:47.987240 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:23:47.989871 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:23:47.991528 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:23:48.350207 systemd-networkd[1440]: eth0: Gained IPv6LL Dec 16 12:23:48.353015 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:23:48.357768 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:23:48.361815 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:23:48.365843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:23:48.370309 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:23:48.411065 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:23:48.411364 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:23:48.413160 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 16 12:23:48.415695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:23:49.005187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:23:49.006814 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:23:49.013137 systemd[1]: Startup finished in 2.103s (kernel) + 4.917s (initrd) + 4.104s (userspace) = 11.126s. Dec 16 12:23:49.023337 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:23:49.388105 kubelet[1638]: E1216 12:23:49.387962 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:23:49.390497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:23:49.390647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:23:49.392188 systemd[1]: kubelet.service: Consumed 721ms CPU time, 249M memory peak. Dec 16 12:23:53.229369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:23:53.230667 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:35234.service - OpenSSH per-connection server daemon (10.0.0.1:35234). Dec 16 12:23:53.334140 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 35234 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:23:53.336644 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:23:53.343992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:23:53.345369 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 16 12:23:53.353425 systemd-logind[1519]: New session 1 of user core. Dec 16 12:23:53.380322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:23:53.383467 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:23:53.407289 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:23:53.411174 systemd-logind[1519]: New session c1 of user core. Dec 16 12:23:53.553756 systemd[1657]: Queued start job for default target default.target. Dec 16 12:23:53.577309 systemd[1657]: Created slice app.slice - User Application Slice. Dec 16 12:23:53.577356 systemd[1657]: Reached target paths.target - Paths. Dec 16 12:23:53.577416 systemd[1657]: Reached target timers.target - Timers. Dec 16 12:23:53.579269 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:23:53.592519 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:23:53.592646 systemd[1657]: Reached target sockets.target - Sockets. Dec 16 12:23:53.592697 systemd[1657]: Reached target basic.target - Basic System. Dec 16 12:23:53.592724 systemd[1657]: Reached target default.target - Main User Target. Dec 16 12:23:53.592749 systemd[1657]: Startup finished in 171ms. Dec 16 12:23:53.593011 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:23:53.598502 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:23:53.671459 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:35250.service - OpenSSH per-connection server daemon (10.0.0.1:35250). Dec 16 12:23:53.760719 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 35250 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:23:53.761954 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:23:53.768919 systemd-logind[1519]: New session 2 of user core. 
Dec 16 12:23:53.783269 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:23:53.838169 sshd[1671]: Connection closed by 10.0.0.1 port 35250
Dec 16 12:23:53.839567 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Dec 16 12:23:53.849639 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:35250.service: Deactivated successfully.
Dec 16 12:23:53.851681 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:23:53.852673 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:23:53.855572 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:35254.service - OpenSSH per-connection server daemon (10.0.0.1:35254).
Dec 16 12:23:53.856972 systemd-logind[1519]: Removed session 2.
Dec 16 12:23:53.923399 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 35254 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:23:53.924941 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:23:53.931745 systemd-logind[1519]: New session 3 of user core.
Dec 16 12:23:53.950286 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:24:54.000121 sshd[1680]: Connection closed by 10.0.0.1 port 35254
Dec 16 12:23:54.000868 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Dec 16 12:23:54.013096 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:35254.service: Deactivated successfully.
Dec 16 12:23:54.018252 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:23:54.021794 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:23:54.024065 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:35268.service - OpenSSH per-connection server daemon (10.0.0.1:35268).
Dec 16 12:23:54.025219 systemd-logind[1519]: Removed session 3.
Dec 16 12:23:54.102760 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 35268 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:23:54.104268 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:23:54.111085 systemd-logind[1519]: New session 4 of user core.
Dec 16 12:23:54.119388 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:23:54.175355 sshd[1689]: Connection closed by 10.0.0.1 port 35268
Dec 16 12:23:54.175943 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Dec 16 12:23:54.191113 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:35268.service: Deactivated successfully.
Dec 16 12:23:54.193121 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:23:54.193992 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:23:54.197398 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:35280.service - OpenSSH per-connection server daemon (10.0.0.1:35280).
Dec 16 12:23:54.198120 systemd-logind[1519]: Removed session 4.
Dec 16 12:23:54.266672 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 35280 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:23:54.268597 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:23:54.274184 systemd-logind[1519]: New session 5 of user core.
Dec 16 12:23:54.291304 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:23:54.357705 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:23:54.358061 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:23:54.385454 sudo[1699]: pam_unix(sudo:session): session closed for user root
Dec 16 12:23:54.388543 sshd[1698]: Connection closed by 10.0.0.1 port 35280
Dec 16 12:23:54.388986 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Dec 16 12:23:54.401140 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:35280.service: Deactivated successfully.
Dec 16 12:23:54.403389 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:23:54.404790 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:23:54.408634 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:35294.service - OpenSSH per-connection server daemon (10.0.0.1:35294).
Dec 16 12:23:54.409657 systemd-logind[1519]: Removed session 5.
Dec 16 12:23:54.470575 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 35294 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:23:54.472371 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:23:54.477603 systemd-logind[1519]: New session 6 of user core.
Dec 16 12:23:54.487283 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:23:54.542388 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:23:54.543077 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:23:54.624700 sudo[1710]: pam_unix(sudo:session): session closed for user root
Dec 16 12:23:54.631141 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:23:54.631480 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:23:54.645896 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:23:54.690062 augenrules[1732]: No rules
Dec 16 12:23:54.691288 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:23:54.691543 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:23:54.693689 sudo[1709]: pam_unix(sudo:session): session closed for user root
Dec 16 12:23:54.697386 sshd[1708]: Connection closed by 10.0.0.1 port 35294
Dec 16 12:23:54.697551 sshd-session[1705]: pam_unix(sshd:session): session closed for user core
Dec 16 12:23:54.708943 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:35294.service: Deactivated successfully.
Dec 16 12:23:54.711701 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:23:54.713741 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:23:54.716822 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:35302.service - OpenSSH per-connection server daemon (10.0.0.1:35302).
Dec 16 12:23:54.717738 systemd-logind[1519]: Removed session 6.
Dec 16 12:23:54.783074 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 35302 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:23:54.784818 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:23:54.791114 systemd-logind[1519]: New session 7 of user core.
Dec 16 12:23:54.800318 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:23:54.854352 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:23:54.854653 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:23:55.175932 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:23:55.189431 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:23:55.422061 dockerd[1765]: time="2025-12-16T12:23:55.421641622Z" level=info msg="Starting up"
Dec 16 12:23:55.423166 dockerd[1765]: time="2025-12-16T12:23:55.423129302Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:23:55.438167 dockerd[1765]: time="2025-12-16T12:23:55.438014702Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:23:55.457994 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport220061312-merged.mount: Deactivated successfully.
Dec 16 12:23:55.489498 dockerd[1765]: time="2025-12-16T12:23:55.489404462Z" level=info msg="Loading containers: start."
Dec 16 12:23:55.499072 kernel: Initializing XFRM netlink socket
Dec 16 12:23:55.742398 systemd-networkd[1440]: docker0: Link UP
Dec 16 12:23:55.747871 dockerd[1765]: time="2025-12-16T12:23:55.747795622Z" level=info msg="Loading containers: done."
Dec 16 12:23:55.763776 dockerd[1765]: time="2025-12-16T12:23:55.763709782Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:23:55.763947 dockerd[1765]: time="2025-12-16T12:23:55.763821302Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:23:55.763947 dockerd[1765]: time="2025-12-16T12:23:55.763923702Z" level=info msg="Initializing buildkit"
Dec 16 12:23:55.792009 dockerd[1765]: time="2025-12-16T12:23:55.791944022Z" level=info msg="Completed buildkit initialization"
Dec 16 12:23:55.799826 dockerd[1765]: time="2025-12-16T12:23:55.799765342Z" level=info msg="Daemon has completed initialization"
Dec 16 12:23:55.800115 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:23:55.800529 dockerd[1765]: time="2025-12-16T12:23:55.799862182Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:23:56.253575 containerd[1545]: time="2025-12-16T12:23:56.253048022Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 12:23:56.857638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148612622.mount: Deactivated successfully.
Dec 16 12:23:57.794011 containerd[1545]: time="2025-12-16T12:23:57.793056982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:57.794011 containerd[1545]: time="2025-12-16T12:23:57.793970142Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042"
Dec 16 12:23:57.794837 containerd[1545]: time="2025-12-16T12:23:57.794803222Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:57.799088 containerd[1545]: time="2025-12-16T12:23:57.798740022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:57.799819 containerd[1545]: time="2025-12-16T12:23:57.799773302Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.54667452s"
Dec 16 12:23:57.799819 containerd[1545]: time="2025-12-16T12:23:57.799815862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Dec 16 12:23:57.800578 containerd[1545]: time="2025-12-16T12:23:57.800549062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 12:23:58.959362 containerd[1545]: time="2025-12-16T12:23:58.959314822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:58.960038 containerd[1545]: time="2025-12-16T12:23:58.959861142Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479"
Dec 16 12:23:58.961142 containerd[1545]: time="2025-12-16T12:23:58.961088542Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:58.963996 containerd[1545]: time="2025-12-16T12:23:58.963942262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:23:58.965586 containerd[1545]: time="2025-12-16T12:23:58.965096782Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.16449908s"
Dec 16 12:23:58.965586 containerd[1545]: time="2025-12-16T12:23:58.965133262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Dec 16 12:23:58.965970 containerd[1545]: time="2025-12-16T12:23:58.965718422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 12:23:59.442934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:23:59.444646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:23:59.647502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:23:59.652255 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:23:59.700070 kubelet[2053]: E1216 12:23:59.699516 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:23:59.703491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:23:59.703941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:23:59.706240 systemd[1]: kubelet.service: Consumed 173ms CPU time, 106M memory peak.
Dec 16 12:24:00.796757 containerd[1545]: time="2025-12-16T12:24:00.796690582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:00.799174 containerd[1545]: time="2025-12-16T12:24:00.799129342Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718"
Dec 16 12:24:00.800644 containerd[1545]: time="2025-12-16T12:24:00.800562902Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:00.804106 containerd[1545]: time="2025-12-16T12:24:00.804052262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:00.805423 containerd[1545]: time="2025-12-16T12:24:00.805142102Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.83938928s"
Dec 16 12:24:00.805423 containerd[1545]: time="2025-12-16T12:24:00.805237862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Dec 16 12:24:00.805806 containerd[1545]: time="2025-12-16T12:24:00.805744142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 12:24:01.968757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895587056.mount: Deactivated successfully.
Dec 16 12:24:02.167695 containerd[1545]: time="2025-12-16T12:24:02.167627862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:02.168168 containerd[1545]: time="2025-12-16T12:24:02.168118942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255"
Dec 16 12:24:02.169190 containerd[1545]: time="2025-12-16T12:24:02.169133062Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:02.171149 containerd[1545]: time="2025-12-16T12:24:02.171104902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:02.171750 containerd[1545]: time="2025-12-16T12:24:02.171721382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.36594704s"
Dec 16 12:24:02.171806 containerd[1545]: time="2025-12-16T12:24:02.171757662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Dec 16 12:24:02.172273 containerd[1545]: time="2025-12-16T12:24:02.172242622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 12:24:02.710321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791727459.mount: Deactivated successfully.
Dec 16 12:24:03.437894 containerd[1545]: time="2025-12-16T12:24:03.437831262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.439047 containerd[1545]: time="2025-12-16T12:24:03.438597302Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408"
Dec 16 12:24:03.439647 containerd[1545]: time="2025-12-16T12:24:03.439612422Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.443146 containerd[1545]: time="2025-12-16T12:24:03.443109622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.444510 containerd[1545]: time="2025-12-16T12:24:03.444472462Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.27218848s"
Dec 16 12:24:03.444510 containerd[1545]: time="2025-12-16T12:24:03.444507542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Dec 16 12:24:03.444986 containerd[1545]: time="2025-12-16T12:24:03.444959702Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 12:24:03.912151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603117931.mount: Deactivated successfully.
Dec 16 12:24:03.918653 containerd[1545]: time="2025-12-16T12:24:03.918137262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.919329 containerd[1545]: time="2025-12-16T12:24:03.919297702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711"
Dec 16 12:24:03.920358 containerd[1545]: time="2025-12-16T12:24:03.920326702Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.922997 containerd[1545]: time="2025-12-16T12:24:03.922964502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:03.923536 containerd[1545]: time="2025-12-16T12:24:03.923502342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 478.47512ms"
Dec 16 12:24:03.923616 containerd[1545]: time="2025-12-16T12:24:03.923538182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Dec 16 12:24:03.924267 containerd[1545]: time="2025-12-16T12:24:03.924053422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 12:24:04.495404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354443279.mount: Deactivated successfully.
Dec 16 12:24:07.031134 containerd[1545]: time="2025-12-16T12:24:07.031054822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:07.031891 containerd[1545]: time="2025-12-16T12:24:07.031845582Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989"
Dec 16 12:24:07.032991 containerd[1545]: time="2025-12-16T12:24:07.032922582Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:07.036996 containerd[1545]: time="2025-12-16T12:24:07.036899262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:07.038189 containerd[1545]: time="2025-12-16T12:24:07.038141862Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.11405796s"
Dec 16 12:24:07.038497 containerd[1545]: time="2025-12-16T12:24:07.038322382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Dec 16 12:24:09.942947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:24:09.947191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:10.128343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:10.134317 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:24:10.167939 kubelet[2215]: E1216 12:24:10.167885 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:24:10.170643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:24:10.170904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:24:10.171357 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.1M memory peak.
Dec 16 12:24:11.547002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:11.547185 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.1M memory peak.
Dec 16 12:24:11.549403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:11.588312 systemd[1]: Reload requested from client PID 2230 ('systemctl') (unit session-7.scope)...
Dec 16 12:24:11.588334 systemd[1]: Reloading...
Dec 16 12:24:11.671394 zram_generator::config[2270]: No configuration found.
Dec 16 12:24:12.083375 systemd[1]: Reloading finished in 494 ms.
Dec 16 12:24:12.137736 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:24:12.137837 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:24:12.138129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:12.138183 systemd[1]: kubelet.service: Consumed 99ms CPU time, 95.1M memory peak.
Dec 16 12:24:12.139928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:12.275752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:12.280272 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:24:12.317147 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:24:12.317147 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:24:12.317497 kubelet[2318]: I1216 12:24:12.317233 2318 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:24:13.054540 kubelet[2318]: I1216 12:24:13.054484 2318 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:24:13.054540 kubelet[2318]: I1216 12:24:13.054519 2318 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:24:13.054540 kubelet[2318]: I1216 12:24:13.054548 2318 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:24:13.054540 kubelet[2318]: I1216 12:24:13.054554 2318 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:24:13.054807 kubelet[2318]: I1216 12:24:13.054791 2318 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:24:13.189263 kubelet[2318]: E1216 12:24:13.188724 2318 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:24:13.190677 kubelet[2318]: I1216 12:24:13.190488 2318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:24:13.194755 kubelet[2318]: I1216 12:24:13.194707 2318 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:24:13.197698 kubelet[2318]: I1216 12:24:13.197669 2318 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:24:13.198138 kubelet[2318]: I1216 12:24:13.198103 2318 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:24:13.198389 kubelet[2318]: I1216 12:24:13.198206 2318 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:24:13.198556 kubelet[2318]: I1216 12:24:13.198540 2318 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:24:13.198615 kubelet[2318]: I1216 12:24:13.198607 2318 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:24:13.198798 kubelet[2318]: I1216 12:24:13.198783 2318 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:24:13.202621 kubelet[2318]: I1216 12:24:13.202590 2318 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:24:13.204115 kubelet[2318]: I1216 12:24:13.204069 2318 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:24:13.205565 kubelet[2318]: I1216 12:24:13.204099 2318 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:24:13.205565 kubelet[2318]: I1216 12:24:13.204705 2318 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:24:13.205565 kubelet[2318]: I1216 12:24:13.204717 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:24:13.205565 kubelet[2318]: E1216 12:24:13.204829 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:24:13.205565 kubelet[2318]: E1216 12:24:13.205362 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:24:13.206467 kubelet[2318]: I1216 12:24:13.206442 2318 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:24:13.207132 kubelet[2318]: I1216 12:24:13.207107 2318 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:24:13.207188 kubelet[2318]: I1216 12:24:13.207141 2318 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 12:24:13.207188 kubelet[2318]: W1216 12:24:13.207182 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:24:13.210359 kubelet[2318]: I1216 12:24:13.210333 2318 server.go:1262] "Started kubelet"
Dec 16 12:24:13.210909 kubelet[2318]: I1216 12:24:13.210857 2318 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:24:13.210967 kubelet[2318]: I1216 12:24:13.210925 2318 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 12:24:13.211399 kubelet[2318]: I1216 12:24:13.211374 2318 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:24:13.212034 kubelet[2318]: I1216 12:24:13.211568 2318 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:24:13.212034 kubelet[2318]: I1216 12:24:13.211696 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:24:13.213523 kubelet[2318]: I1216 12:24:13.213460 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:24:13.216408 kubelet[2318]: E1216 12:24:13.216379 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:13.216500 kubelet[2318]: I1216 12:24:13.216479 2318 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 12:24:13.217148 kubelet[2318]: I1216 12:24:13.216926 2318 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16
12:24:13.218048 kubelet[2318]: E1216 12:24:13.215729 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b1a57bc6cffe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:24:13.210292222 +0000 UTC m=+0.926954001,LastTimestamp:2025-12-16 12:24:13.210292222 +0000 UTC m=+0.926954001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:24:13.218048 kubelet[2318]: E1216 12:24:13.217821 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:24:13.218231 kubelet[2318]: I1216 12:24:13.218195 2318 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:24:13.218329 kubelet[2318]: I1216 12:24:13.218309 2318 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:24:13.219108 kubelet[2318]: E1216 12:24:13.218977 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms" Dec 16 12:24:13.219804 
kubelet[2318]: I1216 12:24:13.219777 2318 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:24:13.220037 kubelet[2318]: E1216 12:24:13.219988 2318 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:24:13.220756 kubelet[2318]: I1216 12:24:13.220580 2318 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:24:13.220756 kubelet[2318]: I1216 12:24:13.220672 2318 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:24:13.230005 kubelet[2318]: I1216 12:24:13.229953 2318 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:24:13.230005 kubelet[2318]: I1216 12:24:13.229977 2318 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:24:13.230005 kubelet[2318]: I1216 12:24:13.229997 2318 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:24:13.234697 kubelet[2318]: I1216 12:24:13.234648 2318 policy_none.go:49] "None policy: Start" Dec 16 12:24:13.234697 kubelet[2318]: I1216 12:24:13.234684 2318 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:24:13.234697 kubelet[2318]: I1216 12:24:13.234700 2318 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:24:13.236737 kubelet[2318]: I1216 12:24:13.236705 2318 policy_none.go:47] "Start" Dec 16 12:24:13.238863 kubelet[2318]: I1216 12:24:13.238827 2318 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:24:13.241911 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:24:13.242840 kubelet[2318]: I1216 12:24:13.242813 2318 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:24:13.243018 kubelet[2318]: I1216 12:24:13.242999 2318 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:24:13.243122 kubelet[2318]: I1216 12:24:13.243110 2318 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:24:13.243259 kubelet[2318]: E1216 12:24:13.243229 2318 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:24:13.244064 kubelet[2318]: E1216 12:24:13.244008 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:24:13.255483 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:24:13.259111 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:24:13.279999 kubelet[2318]: E1216 12:24:13.279966 2318 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:24:13.280242 kubelet[2318]: I1216 12:24:13.280217 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:24:13.280290 kubelet[2318]: I1216 12:24:13.280240 2318 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:24:13.280488 kubelet[2318]: I1216 12:24:13.280468 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:24:13.281760 kubelet[2318]: E1216 12:24:13.281447 2318 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:24:13.281760 kubelet[2318]: E1216 12:24:13.281493 2318 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:24:13.383668 kubelet[2318]: I1216 12:24:13.383555 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:24:13.385786 kubelet[2318]: E1216 12:24:13.385541 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Dec 16 12:24:13.397368 systemd[1]: Created slice kubepods-burstable-pod3629a053d364a6752f05ea09a7b7eb7e.slice - libcontainer container kubepods-burstable-pod3629a053d364a6752f05ea09a7b7eb7e.slice. Dec 16 12:24:13.414973 kubelet[2318]: E1216 12:24:13.414899 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:24:13.419356 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Dec 16 12:24:13.420865 kubelet[2318]: E1216 12:24:13.420829 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms"
Dec 16 12:24:13.421595 kubelet[2318]: E1216 12:24:13.421570 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:13.421850 kubelet[2318]: I1216 12:24:13.421832 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:13.421973 kubelet[2318]: I1216 12:24:13.421859 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:13.421973 kubelet[2318]: I1216 12:24:13.421899 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:13.421973 kubelet[2318]: I1216 12:24:13.421921 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:13.421973 kubelet[2318]: I1216 12:24:13.421937 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:13.421973 kubelet[2318]: I1216 12:24:13.421963 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:13.422120 kubelet[2318]: I1216 12:24:13.421980 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:13.422120 kubelet[2318]: I1216 12:24:13.421996 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:13.422120 kubelet[2318]: I1216 12:24:13.422042 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:13.424267 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice.
Dec 16 12:24:13.426099 kubelet[2318]: E1216 12:24:13.426070 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:13.588078 kubelet[2318]: I1216 12:24:13.587950 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:24:13.588433 kubelet[2318]: E1216 12:24:13.588409 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Dec 16 12:24:13.718583 containerd[1545]: time="2025-12-16T12:24:13.718520542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3629a053d364a6752f05ea09a7b7eb7e,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:13.725341 containerd[1545]: time="2025-12-16T12:24:13.725002382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:13.730271 containerd[1545]: time="2025-12-16T12:24:13.730231382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:13.822315 kubelet[2318]: E1216 12:24:13.822261 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms"
Dec 16 12:24:13.990317 kubelet[2318]: I1216 12:24:13.990185 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:24:13.990618 kubelet[2318]: E1216 12:24:13.990589 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Dec 16 12:24:14.401781 kubelet[2318]: E1216 12:24:14.401653 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:24:14.463726 kubelet[2318]: E1216 12:24:14.463640 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:24:14.508327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612047844.mount: Deactivated successfully.
Dec 16 12:24:14.542433 containerd[1545]: time="2025-12-16T12:24:14.542270742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:24:14.553491 containerd[1545]: time="2025-12-16T12:24:14.553437702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 16 12:24:14.561680 containerd[1545]: time="2025-12-16T12:24:14.561585502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:24:14.566442 containerd[1545]: time="2025-12-16T12:24:14.566059022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:24:14.572833 containerd[1545]: time="2025-12-16T12:24:14.572789142Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:24:14.577635 containerd[1545]: time="2025-12-16T12:24:14.577589342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:24:14.582070 containerd[1545]: time="2025-12-16T12:24:14.581444822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:24:14.588080 containerd[1545]: time="2025-12-16T12:24:14.587970662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:24:14.588966 containerd[1545]: time="2025-12-16T12:24:14.588940942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 857.44216ms"
Dec 16 12:24:14.589811 containerd[1545]: time="2025-12-16T12:24:14.589523582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 869.21668ms"
Dec 16 12:24:14.591063 containerd[1545]: time="2025-12-16T12:24:14.590992302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 864.72748ms"
Dec 16 12:24:14.623538 kubelet[2318]: E1216 12:24:14.623474 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="1.6s"
Dec 16 12:24:14.707769 containerd[1545]: time="2025-12-16T12:24:14.706387942Z" level=info msg="connecting to shim c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77" address="unix:///run/containerd/s/99bdb0bca6902f2a8630b06ed6a35fa1a095574681c4afb10c801e30ffc0be7a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:14.717261 containerd[1545]: time="2025-12-16T12:24:14.717052662Z" level=info msg="connecting to shim 148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e" address="unix:///run/containerd/s/757aedf430dbe36fd8b74b93d9071a0a524de9ad6fcea530c25088c919b2a66a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:14.734081 containerd[1545]: time="2025-12-16T12:24:14.733998262Z" level=info msg="connecting to shim 1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4" address="unix:///run/containerd/s/11efa3528001ef8fdf62ecb144322c2ee01a0cdaf1bc6f4d6ee52526a2b43c01" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:14.738307 systemd[1]: Started cri-containerd-c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77.scope - libcontainer container c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77.
Dec 16 12:24:14.749312 systemd[1]: Started cri-containerd-148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e.scope - libcontainer container 148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e.
Dec 16 12:24:14.760396 systemd[1]: Started cri-containerd-1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4.scope - libcontainer container 1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4.
Dec 16 12:24:14.762011 kubelet[2318]: E1216 12:24:14.761970 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:24:14.779084 kubelet[2318]: E1216 12:24:14.778967 2318 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:24:14.788988 containerd[1545]: time="2025-12-16T12:24:14.788826142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3629a053d364a6752f05ea09a7b7eb7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77\""
Dec 16 12:24:14.793046 kubelet[2318]: I1216 12:24:14.792988 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:24:14.793457 kubelet[2318]: E1216 12:24:14.793408 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Dec 16 12:24:14.798730 containerd[1545]: time="2025-12-16T12:24:14.798685982Z" level=info msg="CreateContainer within sandbox \"c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 12:24:14.810850 containerd[1545]: time="2025-12-16T12:24:14.810613262Z" level=info msg="Container 4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:14.812346 containerd[1545]: time="2025-12-16T12:24:14.812308742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e\""
Dec 16 12:24:14.819656 containerd[1545]: time="2025-12-16T12:24:14.819588702Z" level=info msg="CreateContainer within sandbox \"148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 12:24:14.819899 containerd[1545]: time="2025-12-16T12:24:14.819859982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4\""
Dec 16 12:24:14.822858 containerd[1545]: time="2025-12-16T12:24:14.822398582Z" level=info msg="CreateContainer within sandbox \"c7238776faa3d772f604ab96cbc090bb025dc174d28798b86b111ab7cd8a3e77\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f\""
Dec 16 12:24:14.823316 containerd[1545]: time="2025-12-16T12:24:14.823230662Z" level=info msg="StartContainer for \"4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f\""
Dec 16 12:24:14.824491 containerd[1545]: time="2025-12-16T12:24:14.824452622Z" level=info msg="connecting to shim 4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f" address="unix:///run/containerd/s/99bdb0bca6902f2a8630b06ed6a35fa1a095574681c4afb10c801e30ffc0be7a" protocol=ttrpc version=3
Dec 16 12:24:14.826475 containerd[1545]: time="2025-12-16T12:24:14.826435982Z" level=info msg="CreateContainer within sandbox \"1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 12:24:14.836473 containerd[1545]: time="2025-12-16T12:24:14.835913062Z" level=info msg="Container 2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:14.841254 systemd[1]: Started cri-containerd-4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f.scope - libcontainer container 4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f.
Dec 16 12:24:14.848224 containerd[1545]: time="2025-12-16T12:24:14.848168262Z" level=info msg="CreateContainer within sandbox \"148f33b42d112675a22889804fb9417808577d8c88b559f890116739670d594e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d\""
Dec 16 12:24:14.849058 containerd[1545]: time="2025-12-16T12:24:14.849017542Z" level=info msg="StartContainer for \"2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d\""
Dec 16 12:24:14.850433 containerd[1545]: time="2025-12-16T12:24:14.850405382Z" level=info msg="connecting to shim 2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d" address="unix:///run/containerd/s/757aedf430dbe36fd8b74b93d9071a0a524de9ad6fcea530c25088c919b2a66a" protocol=ttrpc version=3
Dec 16 12:24:14.853104 containerd[1545]: time="2025-12-16T12:24:14.852904422Z" level=info msg="Container 48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:14.865146 containerd[1545]: time="2025-12-16T12:24:14.864624862Z" level=info msg="CreateContainer within sandbox \"1433ac656134d0799e619e420f9071583a8935a1b59d59425682b9057ab86cd4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be\""
Dec 16 12:24:14.865722 containerd[1545]: time="2025-12-16T12:24:14.865685142Z" level=info msg="StartContainer for \"48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be\""
Dec 16 12:24:14.867131 containerd[1545]: time="2025-12-16T12:24:14.867086662Z" level=info msg="connecting to shim 48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be" address="unix:///run/containerd/s/11efa3528001ef8fdf62ecb144322c2ee01a0cdaf1bc6f4d6ee52526a2b43c01" protocol=ttrpc version=3
Dec 16 12:24:14.871993 systemd[1]: Started cri-containerd-2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d.scope - libcontainer container 2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d.
Dec 16 12:24:14.897309 systemd[1]: Started cri-containerd-48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be.scope - libcontainer container 48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be.
Dec 16 12:24:14.907786 containerd[1545]: time="2025-12-16T12:24:14.907238502Z" level=info msg="StartContainer for \"4f515cf8ad98d61d1b20cdc625d5247b19db211779995ac0e0a1d0ded656902f\" returns successfully"
Dec 16 12:24:14.942266 containerd[1545]: time="2025-12-16T12:24:14.942225902Z" level=info msg="StartContainer for \"2801d7af3a48cec22bd443ae7a0204aaa4d37f7aee248b002811214a6fd0b64d\" returns successfully"
Dec 16 12:24:14.959315 containerd[1545]: time="2025-12-16T12:24:14.959194542Z" level=info msg="StartContainer for \"48c3d2dcc33e640aad3c4e6acd5d7b513db761c5c496d21ebf98b066edfbe0be\" returns successfully"
Dec 16 12:24:15.257041 kubelet[2318]: E1216 12:24:15.256921 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:15.262053 kubelet[2318]: E1216 12:24:15.261975 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:15.263047 kubelet[2318]: E1216 12:24:15.263004 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:16.267204 kubelet[2318]: E1216 12:24:16.267147 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:16.268398 kubelet[2318]: E1216 12:24:16.268371 2318 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:24:16.395233 kubelet[2318]: I1216 12:24:16.395180 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:24:16.432793 kubelet[2318]: E1216 12:24:16.432752 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 16 12:24:16.543203 kubelet[2318]: I1216 12:24:16.543003 2318 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 16 12:24:16.543203 kubelet[2318]: E1216 12:24:16.543085 2318 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 16 12:24:16.562012 kubelet[2318]: E1216 12:24:16.561898 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:16.662793 kubelet[2318]: E1216 12:24:16.662679 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:16.763712 kubelet[2318]: E1216 12:24:16.763670 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:16.864599 kubelet[2318]: E1216 12:24:16.864483 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:16.965609 kubelet[2318]: E1216 12:24:16.965570 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:17.066296 kubelet[2318]: E1216 12:24:17.066259 2318 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:17.118573 kubelet[2318]: I1216 12:24:17.118470 2318 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:17.125306 kubelet[2318]: E1216 12:24:17.125254 2318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:17.125306 kubelet[2318]: I1216 12:24:17.125307 2318 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:17.127681 kubelet[2318]: E1216 12:24:17.127653 2318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:17.127681 kubelet[2318]: I1216 12:24:17.127680 2318 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:17.129373 kubelet[2318]: E1216 12:24:17.129350 2318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:17.206806 kubelet[2318]: I1216 12:24:17.206775 2318 apiserver.go:52] "Watching apiserver"
Dec 16 12:24:17.217539 kubelet[2318]: I1216 12:24:17.217499 2318 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 12:24:17.744095 kubelet[2318]: I1216 12:24:17.743768 2318 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:18.727340 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit session-7.scope)...
Dec 16 12:24:18.727375 systemd[1]: Reloading...
Dec 16 12:24:18.846123 zram_generator::config[2658]: No configuration found.
Dec 16 12:24:19.046268 systemd[1]: Reloading finished in 318 ms.
Dec 16 12:24:19.074373 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:19.090053 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 12:24:19.090340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:19.090412 systemd[1]: kubelet.service: Consumed 1.200s CPU time, 121M memory peak.
Dec 16 12:24:19.092446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:24:19.263937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:24:19.269555 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:24:19.311295 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:24:19.311295 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:24:19.311295 kubelet[2697]: I1216 12:24:19.311041 2697 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:24:19.325718 kubelet[2697]: I1216 12:24:19.325581 2697 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:24:19.325718 kubelet[2697]: I1216 12:24:19.325616 2697 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:24:19.325920 kubelet[2697]: I1216 12:24:19.325907 2697 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:24:19.325968 kubelet[2697]: I1216 12:24:19.325957 2697 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:24:19.326335 kubelet[2697]: I1216 12:24:19.326314 2697 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:24:19.328246 kubelet[2697]: I1216 12:24:19.328216 2697 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 16 12:24:19.332810 kubelet[2697]: I1216 12:24:19.332775 2697 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:24:19.337295 kubelet[2697]: I1216 12:24:19.337131 2697 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:24:19.340112 kubelet[2697]: I1216 12:24:19.340073 2697 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:24:19.340364 kubelet[2697]: I1216 12:24:19.340300 2697 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:24:19.340546 kubelet[2697]: I1216 12:24:19.340351 2697 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:24:19.340546 kubelet[2697]: I1216 12:24:19.340533 2697 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:24:19.340546 kubelet[2697]: I1216 12:24:19.340543 2697 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:24:19.340667 kubelet[2697]: I1216 12:24:19.340566 2697 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:24:19.341664 kubelet[2697]: I1216 12:24:19.341626 2697 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:24:19.341826 kubelet[2697]: I1216 12:24:19.341800 2697 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:24:19.341826 kubelet[2697]: I1216 12:24:19.341825 2697 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:24:19.341892 kubelet[2697]: I1216 12:24:19.341848 2697 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:24:19.341892 kubelet[2697]: I1216 12:24:19.341861 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:24:19.344050 kubelet[2697]: I1216 12:24:19.343815 2697 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:24:19.346713 kubelet[2697]: I1216 12:24:19.346675 2697 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:24:19.348083 kubelet[2697]: I1216 12:24:19.348064 2697 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 12:24:19.354183 kubelet[2697]: I1216 12:24:19.354152 2697 server.go:1262] "Started kubelet"
Dec 16 12:24:19.354951 kubelet[2697]: I1216 12:24:19.354480 2697 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:24:19.355973 kubelet[2697]: I1216 12:24:19.355929 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:24:19.356883 kubelet[2697]: I1216 12:24:19.356560 2697 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 12:24:19.357109 kubelet[2697]: I1216 12:24:19.354550 2697 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:24:19.357170 kubelet[2697]: I1216 12:24:19.357142 2697 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 12:24:19.357357 kubelet[2697]: I1216 12:24:19.357337 2697 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:24:19.358702 kubelet[2697]: I1216 12:24:19.358641 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:24:19.361876 kubelet[2697]: E1216 12:24:19.361322 2697 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:24:19.361876 kubelet[2697]: I1216 12:24:19.361366 2697 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 12:24:19.361876 kubelet[2697]: I1216 12:24:19.361530 2697 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 12:24:19.361876 kubelet[2697]: I1216 12:24:19.361689 2697 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 12:24:19.362303 kubelet[2697]: I1216 12:24:19.362249 2697 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:24:19.362372 kubelet[2697]: I1216 12:24:19.362343 2697 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:24:19.363868 kubelet[2697]: E1216 12:24:19.363830 2697 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:24:19.369729 kubelet[2697]: I1216 12:24:19.369680 2697 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:24:19.373444 kubelet[2697]: I1216 12:24:19.373207 2697 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:24:19.380598 kubelet[2697]: I1216 12:24:19.380566 2697 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:24:19.380763 kubelet[2697]: I1216 12:24:19.380751 2697 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 12:24:19.380856 kubelet[2697]: I1216 12:24:19.380846 2697 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 12:24:19.380978 kubelet[2697]: E1216 12:24:19.380949 2697 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:24:19.413599 kubelet[2697]: I1216 12:24:19.413569 2697 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:24:19.413599 kubelet[2697]: I1216 12:24:19.413591 2697 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:24:19.413757 kubelet[2697]: I1216 12:24:19.413614 2697 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:24:19.413757 kubelet[2697]: I1216 12:24:19.413746 2697 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 12:24:19.413799 kubelet[2697]: I1216 12:24:19.413756 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 12:24:19.413799 kubelet[2697]: I1216 12:24:19.413774 2697 policy_none.go:49] "None policy: Start"
Dec 16 12:24:19.414772 kubelet[2697]: I1216 12:24:19.414745 2697 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 12:24:19.414772 kubelet[2697]: I1216 12:24:19.414790 2697 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 12:24:19.414987 kubelet[2697]: I1216 12:24:19.414970 2697 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 16 12:24:19.414987 kubelet[2697]: I1216 12:24:19.414986 2697 policy_none.go:47] "Start"
Dec 16 12:24:19.419507 kubelet[2697]: E1216 12:24:19.418734 2697 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:24:19.419507 kubelet[2697]: I1216 12:24:19.418929 2697 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:24:19.419507 kubelet[2697]: I1216 12:24:19.418943 2697 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:24:19.419507 kubelet[2697]: I1216 12:24:19.419166 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:24:19.420852 kubelet[2697]: E1216 12:24:19.419975 2697 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:24:19.482703 kubelet[2697]: I1216 12:24:19.482648 2697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:19.482853 kubelet[2697]: I1216 12:24:19.482795 2697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.482913 kubelet[2697]: I1216 12:24:19.482662 2697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:19.521050 kubelet[2697]: I1216 12:24:19.521007 2697 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:24:19.563219 kubelet[2697]: I1216 12:24:19.563049 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:19.563219 kubelet[2697]: I1216 12:24:19.563085 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.563219 kubelet[2697]: I1216 12:24:19.563112 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.563219 kubelet[2697]: I1216 12:24:19.563126 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:19.563219 kubelet[2697]: I1216 12:24:19.563143 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3629a053d364a6752f05ea09a7b7eb7e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3629a053d364a6752f05ea09a7b7eb7e\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:19.564090 kubelet[2697]: I1216 12:24:19.563160 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.564090 kubelet[2697]: I1216 12:24:19.563183 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.565358 kubelet[2697]: E1216 12:24:19.565288 2697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.566365 kubelet[2697]: I1216 12:24:19.566329 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:24:19.566465 kubelet[2697]: I1216 12:24:19.566415 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:19.569168 kubelet[2697]: I1216 12:24:19.569143 2697 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 16 12:24:19.569259 kubelet[2697]: I1216 12:24:19.569243 2697 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 16 12:24:19.723582 sudo[2734]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 16 12:24:19.723876 sudo[2734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 16 12:24:20.057235 sudo[2734]: pam_unix(sudo:session): session closed for user root
Dec 16 12:24:20.343591 kubelet[2697]: I1216 12:24:20.343324 2697 apiserver.go:52] "Watching apiserver"
Dec 16 12:24:20.361894 kubelet[2697]: I1216 12:24:20.361821 2697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 12:24:20.393569 kubelet[2697]: I1216 12:24:20.393523 2697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:20.394838 kubelet[2697]: I1216 12:24:20.394792 2697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:20.402060 kubelet[2697]: E1216 12:24:20.400091 2697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:24:20.403974 kubelet[2697]: E1216 12:24:20.403938 2697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:24:20.417700 kubelet[2697]: I1216 12:24:20.417632 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.417614871 podStartE2EDuration="1.417614871s" podCreationTimestamp="2025-12-16 12:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:20.417109468 +0000 UTC m=+1.143555727" watchObservedRunningTime="2025-12-16 12:24:20.417614871 +0000 UTC m=+1.144061090"
Dec 16 12:24:20.442441 kubelet[2697]: I1216 12:24:20.442279 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.442258503 podStartE2EDuration="3.442258503s" podCreationTimestamp="2025-12-16 12:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:20.442048062 +0000 UTC m=+1.168494321" watchObservedRunningTime="2025-12-16 12:24:20.442258503 +0000 UTC m=+1.168704762"
Dec 16 12:24:20.442670 kubelet[2697]: I1216 12:24:20.442459 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.442453305 podStartE2EDuration="1.442453305s" podCreationTimestamp="2025-12-16 12:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:20.429490865 +0000 UTC m=+1.155937124" watchObservedRunningTime="2025-12-16 12:24:20.442453305 +0000 UTC m=+1.168899604"
Dec 16 12:24:22.281943 sudo[1745]: pam_unix(sudo:session): session closed for user root
Dec 16 12:24:22.283926 sshd[1744]: Connection closed by 10.0.0.1 port 35302
Dec 16 12:24:22.284548 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Dec 16 12:24:22.289141 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:35302.service: Deactivated successfully.
Dec 16 12:24:22.292212 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:24:22.292433 systemd[1]: session-7.scope: Consumed 7.232s CPU time, 259.5M memory peak.
Dec 16 12:24:22.296115 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:24:22.297650 systemd-logind[1519]: Removed session 7.
Dec 16 12:24:25.877363 kubelet[2697]: I1216 12:24:25.877330 2697 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 12:24:25.878040 containerd[1545]: time="2025-12-16T12:24:25.877912922Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 12:24:25.878292 kubelet[2697]: I1216 12:24:25.878102 2697 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 12:24:26.991352 systemd[1]: Created slice kubepods-besteffort-pode7ae720a_244e_49a7_9900_d8608a8238fb.slice - libcontainer container kubepods-besteffort-pode7ae720a_244e_49a7_9900_d8608a8238fb.slice.
Dec 16 12:24:27.007821 systemd[1]: Created slice kubepods-burstable-poddd123cd4_53e4_479e_a35b_b4335c79f686.slice - libcontainer container kubepods-burstable-poddd123cd4_53e4_479e_a35b_b4335c79f686.slice.
Dec 16 12:24:27.014636 kubelet[2697]: I1216 12:24:27.014576 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-hubble-tls\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.014636 kubelet[2697]: I1216 12:24:27.014622 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7ae720a-244e-49a7-9900-d8608a8238fb-kube-proxy\") pod \"kube-proxy-n427z\" (UID: \"e7ae720a-244e-49a7-9900-d8608a8238fb\") " pod="kube-system/kube-proxy-n427z"
Dec 16 12:24:27.014636 kubelet[2697]: I1216 12:24:27.014641 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-kernel\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014656 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t48mx\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-kube-api-access-t48mx\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014675 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7ae720a-244e-49a7-9900-d8608a8238fb-xtables-lock\") pod \"kube-proxy-n427z\" (UID: \"e7ae720a-244e-49a7-9900-d8608a8238fb\") " pod="kube-system/kube-proxy-n427z"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014691 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlw79\" (UniqueName: \"kubernetes.io/projected/e7ae720a-244e-49a7-9900-d8608a8238fb-kube-api-access-dlw79\") pod \"kube-proxy-n427z\" (UID: \"e7ae720a-244e-49a7-9900-d8608a8238fb\") " pod="kube-system/kube-proxy-n427z"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014708 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-run\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014722 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-bpf-maps\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015008 kubelet[2697]: I1216 12:24:27.014736 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cni-path\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014750 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-etc-cni-netd\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014765 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-hostproc\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014779 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-cgroup\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014822 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-lib-modules\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014861 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd123cd4-53e4-479e-a35b-b4335c79f686-clustermesh-secrets\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015187 kubelet[2697]: I1216 12:24:27.014883 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-config-path\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015336 kubelet[2697]: I1216 12:24:27.014912 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7ae720a-244e-49a7-9900-d8608a8238fb-lib-modules\") pod \"kube-proxy-n427z\" (UID: \"e7ae720a-244e-49a7-9900-d8608a8238fb\") " pod="kube-system/kube-proxy-n427z"
Dec 16 12:24:27.015336 kubelet[2697]: I1216 12:24:27.014939 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-xtables-lock\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.015336 kubelet[2697]: I1216 12:24:27.014959 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-net\") pod \"cilium-dxttg\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") " pod="kube-system/cilium-dxttg"
Dec 16 12:24:27.115509 kubelet[2697]: E1216 12:24:27.115456 2697 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-operator-6f9c7c5859-4n5lv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="750e6e3b-7377-448e-a5a9-554b82c83939" pod="kube-system/cilium-operator-6f9c7c5859-4n5lv"
Dec 16 12:24:27.118709 systemd[1]: Created slice kubepods-besteffort-pod750e6e3b_7377_448e_a5a9_554b82c83939.slice - libcontainer container kubepods-besteffort-pod750e6e3b_7377_448e_a5a9_554b82c83939.slice.
Dec 16 12:24:27.219103 kubelet[2697]: I1216 12:24:27.219051 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mmx\" (UniqueName: \"kubernetes.io/projected/750e6e3b-7377-448e-a5a9-554b82c83939-kube-api-access-d2mmx\") pod \"cilium-operator-6f9c7c5859-4n5lv\" (UID: \"750e6e3b-7377-448e-a5a9-554b82c83939\") " pod="kube-system/cilium-operator-6f9c7c5859-4n5lv"
Dec 16 12:24:27.219103 kubelet[2697]: I1216 12:24:27.219096 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750e6e3b-7377-448e-a5a9-554b82c83939-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-4n5lv\" (UID: \"750e6e3b-7377-448e-a5a9-554b82c83939\") " pod="kube-system/cilium-operator-6f9c7c5859-4n5lv"
Dec 16 12:24:27.306075 containerd[1545]: time="2025-12-16T12:24:27.305943177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n427z,Uid:e7ae720a-244e-49a7-9900-d8608a8238fb,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:27.313123 containerd[1545]: time="2025-12-16T12:24:27.313086126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxttg,Uid:dd123cd4-53e4-479e-a35b-b4335c79f686,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:27.327965 containerd[1545]: time="2025-12-16T12:24:27.327815063Z" level=info msg="connecting to shim 30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d" address="unix:///run/containerd/s/fbdc68762ed06b2672c8bfbb85ff4186bd4dab34a9afc5f16dff5dfa6750d66b" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:27.338309 containerd[1545]: time="2025-12-16T12:24:27.338249824Z" level=info msg="connecting to shim cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:27.359255 systemd[1]: Started cri-containerd-cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312.scope - libcontainer container cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312.
Dec 16 12:24:27.362871 systemd[1]: Started cri-containerd-30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d.scope - libcontainer container 30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d.
Dec 16 12:24:27.394340 containerd[1545]: time="2025-12-16T12:24:27.394269044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxttg,Uid:dd123cd4-53e4-479e-a35b-b4335c79f686,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\""
Dec 16 12:24:27.395912 containerd[1545]: time="2025-12-16T12:24:27.395869491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n427z,Uid:e7ae720a-244e-49a7-9900-d8608a8238fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d\""
Dec 16 12:24:27.397149 containerd[1545]: time="2025-12-16T12:24:27.397016935Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 16 12:24:27.409051 containerd[1545]: time="2025-12-16T12:24:27.408377260Z" level=info msg="CreateContainer within sandbox \"30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 12:24:27.417740 containerd[1545]: time="2025-12-16T12:24:27.417679376Z" level=info msg="Container 17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:27.426110 containerd[1545]: time="2025-12-16T12:24:27.426061529Z" level=info msg="CreateContainer within sandbox \"30a1c9ff62fa97a65cd4e7348e36188824b75a4997d87b09f15251af47a32a9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e\""
Dec 16 12:24:27.427432 containerd[1545]: time="2025-12-16T12:24:27.426843132Z" level=info msg="StartContainer for \"17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e\""
Dec 16 12:24:27.428794 containerd[1545]: time="2025-12-16T12:24:27.428740700Z" level=info msg="connecting to shim 17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e" address="unix:///run/containerd/s/fbdc68762ed06b2672c8bfbb85ff4186bd4dab34a9afc5f16dff5dfa6750d66b" protocol=ttrpc version=3
Dec 16 12:24:27.445046 containerd[1545]: time="2025-12-16T12:24:27.444769163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4n5lv,Uid:750e6e3b-7377-448e-a5a9-554b82c83939,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:27.452285 systemd[1]: Started cri-containerd-17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e.scope - libcontainer container 17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e.
Dec 16 12:24:27.461360 containerd[1545]: time="2025-12-16T12:24:27.461185947Z" level=info msg="connecting to shim 787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6" address="unix:///run/containerd/s/6f7058a3cb23c5a7aa9ae6c06fc280b85f6faf5d04939ce0d2c6c45bab20b30f" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:27.490252 systemd[1]: Started cri-containerd-787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6.scope - libcontainer container 787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6.
Dec 16 12:24:27.609617 containerd[1545]: time="2025-12-16T12:24:27.609507969Z" level=info msg="StartContainer for \"17a4b1d99fd5cc47f352e7e1c22d34746150ed6b0b7ef241138f760b8bc3783e\" returns successfully"
Dec 16 12:24:27.653544 containerd[1545]: time="2025-12-16T12:24:27.652698859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-4n5lv,Uid:750e6e3b-7377-448e-a5a9-554b82c83939,Namespace:kube-system,Attempt:0,} returns sandbox id \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\""
Dec 16 12:24:28.436958 kubelet[2697]: I1216 12:24:28.436719 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n427z" podStartSLOduration=2.436703191 podStartE2EDuration="2.436703191s" podCreationTimestamp="2025-12-16 12:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:28.43667391 +0000 UTC m=+9.163120209" watchObservedRunningTime="2025-12-16 12:24:28.436703191 +0000 UTC m=+9.163149450"
Dec 16 12:24:32.892617 update_engine[1524]: I20251216 12:24:32.892431 1524 update_attempter.cc:509] Updating boot flags...
Dec 16 12:24:36.243540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945574058.mount: Deactivated successfully.
Dec 16 12:24:37.658441 containerd[1545]: time="2025-12-16T12:24:37.658361890Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:37.659227 containerd[1545]: time="2025-12-16T12:24:37.658939131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Dec 16 12:24:37.659947 containerd[1545]: time="2025-12-16T12:24:37.659912573Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:37.661448 containerd[1545]: time="2025-12-16T12:24:37.661409256Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.264333401s"
Dec 16 12:24:37.661448 containerd[1545]: time="2025-12-16T12:24:37.661446776Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 16 12:24:37.662629 containerd[1545]: time="2025-12-16T12:24:37.662593579Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 12:24:37.667882 containerd[1545]: time="2025-12-16T12:24:37.667832149Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:24:37.678377 containerd[1545]: time="2025-12-16T12:24:37.676579447Z" level=info msg="Container 5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:37.681675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022805349.mount: Deactivated successfully.
Dec 16 12:24:37.688359 containerd[1545]: time="2025-12-16T12:24:37.688289631Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\""
Dec 16 12:24:37.689253 containerd[1545]: time="2025-12-16T12:24:37.689179193Z" level=info msg="StartContainer for \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\""
Dec 16 12:24:37.690716 containerd[1545]: time="2025-12-16T12:24:37.690660036Z" level=info msg="connecting to shim 5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" protocol=ttrpc version=3
Dec 16 12:24:37.741273 systemd[1]: Started cri-containerd-5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90.scope - libcontainer container 5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90.
Dec 16 12:24:37.778406 containerd[1545]: time="2025-12-16T12:24:37.778354697Z" level=info msg="StartContainer for \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" returns successfully"
Dec 16 12:24:37.792549 systemd[1]: cri-containerd-5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90.scope: Deactivated successfully.
Dec 16 12:24:37.831706 containerd[1545]: time="2025-12-16T12:24:37.831638887Z" level=info msg="received container exit event container_id:\"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" id:\"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" pid:3147 exited_at:{seconds:1765887877 nanos:819671582}"
Dec 16 12:24:37.880985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90-rootfs.mount: Deactivated successfully.
Dec 16 12:24:38.449744 containerd[1545]: time="2025-12-16T12:24:38.449700542Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:24:38.458886 containerd[1545]: time="2025-12-16T12:24:38.458832359Z" level=info msg="Container a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:38.470445 containerd[1545]: time="2025-12-16T12:24:38.470233501Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\""
Dec 16 12:24:38.473009 containerd[1545]: time="2025-12-16T12:24:38.472879906Z" level=info msg="StartContainer for \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\""
Dec 16 12:24:38.474891 containerd[1545]: time="2025-12-16T12:24:38.474851550Z" level=info msg="connecting to shim a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" protocol=ttrpc version=3
Dec 16 12:24:38.511263 systemd[1]: Started cri-containerd-a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29.scope - libcontainer container a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29.
Dec 16 12:24:38.540293 containerd[1545]: time="2025-12-16T12:24:38.540255276Z" level=info msg="StartContainer for \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" returns successfully"
Dec 16 12:24:38.553373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:24:38.553609 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:24:38.553839 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:24:38.555521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:24:38.557390 systemd[1]: cri-containerd-a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29.scope: Deactivated successfully.
Dec 16 12:24:38.565871 containerd[1545]: time="2025-12-16T12:24:38.565455405Z" level=info msg="received container exit event container_id:\"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" id:\"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" pid:3197 exited_at:{seconds:1765887878 nanos:563401081}"
Dec 16 12:24:38.591622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:24:39.056661 containerd[1545]: time="2025-12-16T12:24:39.056579867Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:39.057455 containerd[1545]: time="2025-12-16T12:24:39.057425348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Dec 16 12:24:39.058957 containerd[1545]: time="2025-12-16T12:24:39.058917311Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:24:39.061469 containerd[1545]: time="2025-12-16T12:24:39.061426315Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.398784656s"
Dec 16 12:24:39.061527 containerd[1545]: time="2025-12-16T12:24:39.061469315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 16 12:24:39.066992 containerd[1545]: time="2025-12-16T12:24:39.066947365Z" level=info msg="CreateContainer within sandbox \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 16 12:24:39.077670 containerd[1545]: time="2025-12-16T12:24:39.077614385Z" level=info msg="Container 3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:39.085996 containerd[1545]: time="2025-12-16T12:24:39.085952200Z" level=info msg="CreateContainer within sandbox \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\""
Dec 16 12:24:39.086514 containerd[1545]: time="2025-12-16T12:24:39.086481681Z" level=info msg="StartContainer for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\""
Dec 16 12:24:39.087536 containerd[1545]: time="2025-12-16T12:24:39.087498123Z" level=info msg="connecting to shim 3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742" address="unix:///run/containerd/s/6f7058a3cb23c5a7aa9ae6c06fc280b85f6faf5d04939ce0d2c6c45bab20b30f" protocol=ttrpc version=3
Dec 16 12:24:39.110245 systemd[1]: Started cri-containerd-3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742.scope - libcontainer container 3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742.
Dec 16 12:24:39.141104 containerd[1545]: time="2025-12-16T12:24:39.139338336Z" level=info msg="StartContainer for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" returns successfully"
Dec 16 12:24:39.457002 containerd[1545]: time="2025-12-16T12:24:39.456960391Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:24:39.485067 containerd[1545]: time="2025-12-16T12:24:39.482887078Z" level=info msg="Container 06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:39.502577 kubelet[2697]: I1216 12:24:39.502505 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-4n5lv" podStartSLOduration=1.095066664 podStartE2EDuration="12.502486954s" podCreationTimestamp="2025-12-16 12:24:27 +0000 UTC" firstStartedPulling="2025-12-16 12:24:27.654661947 +0000 UTC m=+8.381108166" lastFinishedPulling="2025-12-16 12:24:39.062082197 +0000 UTC m=+19.788528456" observedRunningTime="2025-12-16 12:24:39.502465274 +0000 UTC m=+20.228911533" watchObservedRunningTime="2025-12-16 12:24:39.502486954 +0000 UTC m=+20.228933213"
Dec 16 12:24:39.525657 containerd[1545]: time="2025-12-16T12:24:39.525567875Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\""
Dec 16 12:24:39.527423 containerd[1545]: time="2025-12-16T12:24:39.526701957Z" level=info msg="StartContainer for \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\""
Dec 16 12:24:39.528291 containerd[1545]: time="2025-12-16T12:24:39.528251480Z" level=info msg="connecting to shim 06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" protocol=ttrpc version=3
Dec 16 12:24:39.550254 systemd[1]: Started cri-containerd-06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329.scope - libcontainer container 06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329.
Dec 16 12:24:39.643193 systemd[1]: cri-containerd-06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329.scope: Deactivated successfully.
Dec 16 12:24:39.666180 containerd[1545]: time="2025-12-16T12:24:39.666122730Z" level=info msg="received container exit event container_id:\"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" id:\"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" pid:3295 exited_at:{seconds:1765887879 nanos:646177294}"
Dec 16 12:24:39.681732 containerd[1545]: time="2025-12-16T12:24:39.681668318Z" level=info msg="StartContainer for \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" returns successfully"
Dec 16 12:24:39.711010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329-rootfs.mount: Deactivated successfully.
Dec 16 12:24:40.484989 containerd[1545]: time="2025-12-16T12:24:40.484754997Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:24:40.536733 containerd[1545]: time="2025-12-16T12:24:40.535793803Z" level=info msg="Container 496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:40.537353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368468904.mount: Deactivated successfully.
Dec 16 12:24:40.547670 containerd[1545]: time="2025-12-16T12:24:40.547600263Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\""
Dec 16 12:24:40.548248 containerd[1545]: time="2025-12-16T12:24:40.548210104Z" level=info msg="StartContainer for \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\""
Dec 16 12:24:40.549485 containerd[1545]: time="2025-12-16T12:24:40.549432707Z" level=info msg="connecting to shim 496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" protocol=ttrpc version=3
Dec 16 12:24:40.581329 systemd[1]: Started cri-containerd-496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea.scope - libcontainer container 496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea.
Dec 16 12:24:40.631863 systemd[1]: cri-containerd-496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea.scope: Deactivated successfully.
Dec 16 12:24:40.634302 containerd[1545]: time="2025-12-16T12:24:40.634254610Z" level=info msg="received container exit event container_id:\"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" id:\"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" pid:3334 exited_at:{seconds:1765887880 nanos:633817930}"
Dec 16 12:24:40.638400 containerd[1545]: time="2025-12-16T12:24:40.638352577Z" level=info msg="StartContainer for \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" returns successfully"
Dec 16 12:24:41.495194 containerd[1545]: time="2025-12-16T12:24:41.495139139Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:24:41.515863 containerd[1545]: time="2025-12-16T12:24:41.515715932Z" level=info msg="Container 3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:41.525565 containerd[1545]: time="2025-12-16T12:24:41.525521027Z" level=info msg="CreateContainer within sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\""
Dec 16 12:24:41.527444 containerd[1545]: time="2025-12-16T12:24:41.527403070Z" level=info msg="StartContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\""
Dec 16 12:24:41.528610 containerd[1545]: time="2025-12-16T12:24:41.528572472Z" level=info msg="connecting to shim 3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f" address="unix:///run/containerd/s/ec404da7de6e3c50d8dc003282c5db1d38a3595a7fdc25d6bccdf7e188e29f68" protocol=ttrpc version=3
Dec 16 12:24:41.563279 systemd[1]: Started cri-containerd-3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f.scope - libcontainer container 3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f.
Dec 16 12:24:41.611051 containerd[1545]: time="2025-12-16T12:24:41.610928643Z" level=info msg="StartContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" returns successfully"
Dec 16 12:24:41.757401 kubelet[2697]: I1216 12:24:41.757177 2697 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 16 12:24:41.820043 systemd[1]: Created slice kubepods-burstable-podc27ae4d2_4265_4edf_8137_be5094081858.slice - libcontainer container kubepods-burstable-podc27ae4d2_4265_4edf_8137_be5094081858.slice.
Dec 16 12:24:41.824922 systemd[1]: Created slice kubepods-burstable-podc4e078c1_19fe_4b1e_be3b_b4490714de93.slice - libcontainer container kubepods-burstable-podc4e078c1_19fe_4b1e_be3b_b4490714de93.slice.
Dec 16 12:24:41.927967 kubelet[2697]: I1216 12:24:41.927913 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4e078c1-19fe-4b1e-be3b-b4490714de93-config-volume\") pod \"coredns-66bc5c9577-kmnn6\" (UID: \"c4e078c1-19fe-4b1e-be3b-b4490714de93\") " pod="kube-system/coredns-66bc5c9577-kmnn6"
Dec 16 12:24:41.927967 kubelet[2697]: I1216 12:24:41.927970 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h56md\" (UniqueName: \"kubernetes.io/projected/c4e078c1-19fe-4b1e-be3b-b4490714de93-kube-api-access-h56md\") pod \"coredns-66bc5c9577-kmnn6\" (UID: \"c4e078c1-19fe-4b1e-be3b-b4490714de93\") " pod="kube-system/coredns-66bc5c9577-kmnn6"
Dec 16 12:24:41.928165 kubelet[2697]: I1216 12:24:41.927991 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c27ae4d2-4265-4edf-8137-be5094081858-config-volume\") pod \"coredns-66bc5c9577-wzptr\" (UID: \"c27ae4d2-4265-4edf-8137-be5094081858\") " pod="kube-system/coredns-66bc5c9577-wzptr"
Dec 16 12:24:41.928165 kubelet[2697]: I1216 12:24:41.928013 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj97v\" (UniqueName: \"kubernetes.io/projected/c27ae4d2-4265-4edf-8137-be5094081858-kube-api-access-lj97v\") pod \"coredns-66bc5c9577-wzptr\" (UID: \"c27ae4d2-4265-4edf-8137-be5094081858\") " pod="kube-system/coredns-66bc5c9577-wzptr"
Dec 16 12:24:42.129726 containerd[1545]: time="2025-12-16T12:24:42.129539975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wzptr,Uid:c27ae4d2-4265-4edf-8137-be5094081858,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:42.137498 containerd[1545]: time="2025-12-16T12:24:42.137416747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmnn6,Uid:c4e078c1-19fe-4b1e-be3b-b4490714de93,Namespace:kube-system,Attempt:0,}"
Dec 16 12:24:42.517903 kubelet[2697]: I1216 12:24:42.517764 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dxttg" podStartSLOduration=6.251764309 podStartE2EDuration="16.517749634s" podCreationTimestamp="2025-12-16 12:24:26 +0000 UTC" firstStartedPulling="2025-12-16 12:24:27.396385613 +0000 UTC m=+8.122831832" lastFinishedPulling="2025-12-16 12:24:37.662370858 +0000 UTC m=+18.388817157" observedRunningTime="2025-12-16 12:24:42.517405074 +0000 UTC m=+23.243851373" watchObservedRunningTime="2025-12-16 12:24:42.517749634 +0000 UTC m=+23.244195893"
Dec 16 12:24:43.767299 systemd-networkd[1440]: cilium_host: Link UP
Dec 16 12:24:43.767510 systemd-networkd[1440]: cilium_net: Link UP
Dec 16 12:24:43.767709 systemd-networkd[1440]: cilium_net: Gained carrier
Dec 16 12:24:43.767894 systemd-networkd[1440]: cilium_host: Gained carrier
Dec 16 12:24:43.869068 systemd-networkd[1440]: cilium_vxlan: Link UP
Dec 16 12:24:43.869074 systemd-networkd[1440]: cilium_vxlan: Gained carrier
Dec 16 12:24:43.870148 systemd-networkd[1440]: cilium_host: Gained IPv6LL
Dec 16 12:24:44.168074 kernel: NET: Registered PF_ALG protocol family
Dec 16 12:24:44.415202 systemd-networkd[1440]: cilium_net: Gained IPv6LL
Dec 16 12:24:44.853744 systemd-networkd[1440]: lxc_health: Link UP
Dec 16 12:24:44.854086 systemd-networkd[1440]: lxc_health: Gained carrier
Dec 16 12:24:45.199018 systemd-networkd[1440]: lxc67a3d7b52e3d: Link UP
Dec 16 12:24:45.200716 systemd-networkd[1440]: lxc5affa2aa81b0: Link UP
Dec 16 12:24:45.201062 kernel: eth0: renamed from tmp1ef72
Dec 16 12:24:45.211379 kernel: eth0: renamed from tmp945da
Dec 16 12:24:45.211721 systemd-networkd[1440]: lxc67a3d7b52e3d: Gained carrier
Dec 16 12:24:45.212130 systemd-networkd[1440]: lxc5affa2aa81b0: Gained carrier
Dec 16 12:24:45.887827 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL
Dec 16 12:24:46.078232 systemd-networkd[1440]: lxc_health: Gained IPv6LL
Dec 16 12:24:46.526285 systemd-networkd[1440]: lxc5affa2aa81b0: Gained IPv6LL
Dec 16 12:24:47.039287 systemd-networkd[1440]: lxc67a3d7b52e3d: Gained IPv6LL
Dec 16 12:24:49.001793 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752).
Dec 16 12:24:49.059239 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:24:49.060644 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:24:49.066127 systemd-logind[1519]: New session 8 of user core.
Dec 16 12:24:49.077431 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:24:49.224722 sshd[3884]: Connection closed by 10.0.0.1 port 45752
Dec 16 12:24:49.225317 sshd-session[3881]: pam_unix(sshd:session): session closed for user core
Dec 16 12:24:49.232496 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:45752.service: Deactivated successfully.
Dec 16 12:24:49.234766 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 12:24:49.237214 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit.
Dec 16 12:24:49.239304 systemd-logind[1519]: Removed session 8.
Dec 16 12:24:49.374916 containerd[1545]: time="2025-12-16T12:24:49.374763372Z" level=info msg="connecting to shim 945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015" address="unix:///run/containerd/s/8fea9ee2de69ad939f0c9e079c62bf59df7fb59cdf0e08e22b6aa415d8378133" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:49.400138 containerd[1545]: time="2025-12-16T12:24:49.400016076Z" level=info msg="connecting to shim 1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9" address="unix:///run/containerd/s/c22d8d7472f90ac26ca84b9fabba7a82119fcaffc80e7ba57fc2d8d81645ee0a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:24:49.417245 systemd[1]: Started cri-containerd-945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015.scope - libcontainer container 945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015.
Dec 16 12:24:49.422259 systemd[1]: Started cri-containerd-1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9.scope - libcontainer container 1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9.
Dec 16 12:24:49.434841 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 12:24:49.439755 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 12:24:49.464481 containerd[1545]: time="2025-12-16T12:24:49.464435977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wzptr,Uid:c27ae4d2-4265-4edf-8137-be5094081858,Namespace:kube-system,Attempt:0,} returns sandbox id \"945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015\""
Dec 16 12:24:49.467292 containerd[1545]: time="2025-12-16T12:24:49.467236380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmnn6,Uid:c4e078c1-19fe-4b1e-be3b-b4490714de93,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9\""
Dec 16 12:24:49.471384 containerd[1545]: time="2025-12-16T12:24:49.471206624Z" level=info msg="CreateContainer within sandbox \"945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 12:24:49.472865 containerd[1545]: time="2025-12-16T12:24:49.472811025Z" level=info msg="CreateContainer within sandbox \"1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 12:24:49.486420 containerd[1545]: time="2025-12-16T12:24:49.486371158Z" level=info msg="Container fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:49.488274 containerd[1545]: time="2025-12-16T12:24:49.488243080Z" level=info msg="Container 63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:24:49.497714 containerd[1545]: time="2025-12-16T12:24:49.497670729Z" level=info msg="CreateContainer within sandbox \"945da6caabe9748c084c901786800477f7701d7918afe41d4831b0353a584015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6\""
Dec 16 12:24:49.498509 containerd[1545]: time="2025-12-16T12:24:49.498357010Z" level=info msg="StartContainer for \"fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6\""
Dec 16 12:24:49.499561 containerd[1545]: time="2025-12-16T12:24:49.499491971Z" level=info msg="connecting to shim fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6" address="unix:///run/containerd/s/8fea9ee2de69ad939f0c9e079c62bf59df7fb59cdf0e08e22b6aa415d8378133" protocol=ttrpc version=3
Dec 16 12:24:49.500841 containerd[1545]: time="2025-12-16T12:24:49.500804772Z" level=info msg="CreateContainer within sandbox \"1ef7200c7f66c8203fac6323ced9d4ad89fd06c6fdf0904ef6b20a83139193d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978\""
Dec 16 12:24:49.502135 containerd[1545]: time="2025-12-16T12:24:49.502101453Z" level=info msg="StartContainer for \"63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978\""
Dec 16 12:24:49.506364 containerd[1545]: time="2025-12-16T12:24:49.506313417Z" level=info msg="connecting to shim 63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978" address="unix:///run/containerd/s/c22d8d7472f90ac26ca84b9fabba7a82119fcaffc80e7ba57fc2d8d81645ee0a" protocol=ttrpc version=3
Dec 16 12:24:49.533258 systemd[1]: Started cri-containerd-fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6.scope - libcontainer container fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6.
Dec 16 12:24:49.537652 systemd[1]: Started cri-containerd-63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978.scope - libcontainer container 63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978.
Dec 16 12:24:49.575536 containerd[1545]: time="2025-12-16T12:24:49.575489003Z" level=info msg="StartContainer for \"fd6d92f78e9724faf138521d493a8a1bfd888a773e3e721a5ba7975fa7396fb6\" returns successfully"
Dec 16 12:24:49.581295 containerd[1545]: time="2025-12-16T12:24:49.581253328Z" level=info msg="StartContainer for \"63d7854cc66b1d9fe3649176c42680a2031512f7b963b503af156d9884e49978\" returns successfully"
Dec 16 12:24:50.554863 kubelet[2697]: I1216 12:24:50.554185 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wzptr" podStartSLOduration=23.554165939 podStartE2EDuration="23.554165939s" podCreationTimestamp="2025-12-16 12:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:50.552883938 +0000 UTC m=+31.279330197" watchObservedRunningTime="2025-12-16 12:24:50.554165939 +0000 UTC m=+31.280612198"
Dec 16 12:24:54.241082 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:40226.service - OpenSSH per-connection server daemon (10.0.0.1:40226).
Dec 16 12:24:54.304180 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 40226 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:24:54.305926 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:24:54.311000 systemd-logind[1519]: New session 9 of user core.
Dec 16 12:24:54.323471 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 12:24:54.472433 sshd[4071]: Connection closed by 10.0.0.1 port 40226
Dec 16 12:24:54.472740 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
Dec 16 12:24:54.478046 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:40226.service: Deactivated successfully.
Dec 16 12:24:54.480965 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 12:24:54.482804 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit.
Dec 16 12:24:54.485077 systemd-logind[1519]: Removed session 9.
Dec 16 12:24:59.491431 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:40234.service - OpenSSH per-connection server daemon (10.0.0.1:40234).
Dec 16 12:24:59.545501 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 40234 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:24:59.546942 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:24:59.551843 systemd-logind[1519]: New session 10 of user core.
Dec 16 12:24:59.561297 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 12:24:59.687293 sshd[4090]: Connection closed by 10.0.0.1 port 40234
Dec 16 12:24:59.688137 sshd-session[4087]: pam_unix(sshd:session): session closed for user core
Dec 16 12:24:59.692415 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:40234.service: Deactivated successfully.
Dec 16 12:24:59.694286 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 12:24:59.695404 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit.
Dec 16 12:24:59.697954 systemd-logind[1519]: Removed session 10.
Dec 16 12:25:04.710727 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:41020.service - OpenSSH per-connection server daemon (10.0.0.1:41020).
Dec 16 12:25:04.792245 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:04.795690 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:04.802213 systemd-logind[1519]: New session 11 of user core.
Dec 16 12:25:04.815306 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 12:25:04.953787 sshd[4110]: Connection closed by 10.0.0.1 port 41020
Dec 16 12:25:04.954250 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:04.958539 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:41020.service: Deactivated successfully.
Dec 16 12:25:04.961013 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 12:25:04.963960 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit.
Dec 16 12:25:04.965018 systemd-logind[1519]: Removed session 11.
Dec 16 12:25:09.972507 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028).
Dec 16 12:25:10.035871 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:10.037756 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:10.042783 systemd-logind[1519]: New session 12 of user core.
Dec 16 12:25:10.060326 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 12:25:10.197292 sshd[4128]: Connection closed by 10.0.0.1 port 41028
Dec 16 12:25:10.197604 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:10.207841 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:41028.service: Deactivated successfully.
Dec 16 12:25:10.210463 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 12:25:10.212865 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit.
Dec 16 12:25:10.218688 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:41032.service - OpenSSH per-connection server daemon (10.0.0.1:41032).
Dec 16 12:25:10.221461 systemd-logind[1519]: Removed session 12.
Dec 16 12:25:10.279687 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 41032 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:10.281677 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:10.287062 systemd-logind[1519]: New session 13 of user core.
Dec 16 12:25:10.298315 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 12:25:10.486792 sshd[4147]: Connection closed by 10.0.0.1 port 41032
Dec 16 12:25:10.488096 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:10.497737 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:41032.service: Deactivated successfully.
Dec 16 12:25:10.499694 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 12:25:10.502688 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit.
Dec 16 12:25:10.505598 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:41034.service - OpenSSH per-connection server daemon (10.0.0.1:41034).
Dec 16 12:25:10.510373 systemd-logind[1519]: Removed session 13.
Dec 16 12:25:10.577116 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:10.578728 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:10.583711 systemd-logind[1519]: New session 14 of user core.
Dec 16 12:25:10.600280 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 12:25:10.739181 sshd[4162]: Connection closed by 10.0.0.1 port 41034
Dec 16 12:25:10.739752 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:10.743791 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:41034.service: Deactivated successfully.
Dec 16 12:25:10.746833 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 12:25:10.748594 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit.
Dec 16 12:25:10.750173 systemd-logind[1519]: Removed session 14.
Dec 16 12:25:15.756366 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:41462.service - OpenSSH per-connection server daemon (10.0.0.1:41462).
Dec 16 12:25:15.821398 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:15.823399 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:15.828957 systemd-logind[1519]: New session 15 of user core.
Dec 16 12:25:15.840241 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 12:25:15.982109 sshd[4179]: Connection closed by 10.0.0.1 port 41462
Dec 16 12:25:15.982681 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:15.987668 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:41462.service: Deactivated successfully.
Dec 16 12:25:15.989621 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 12:25:15.991344 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit.
Dec 16 12:25:15.993639 systemd-logind[1519]: Removed session 15.
Dec 16 12:25:20.999291 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:37354.service - OpenSSH per-connection server daemon (10.0.0.1:37354).
Dec 16 12:25:21.057672 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 37354 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:21.059623 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:21.066458 systemd-logind[1519]: New session 16 of user core.
Dec 16 12:25:21.078288 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 12:25:21.227973 sshd[4198]: Connection closed by 10.0.0.1 port 37354
Dec 16 12:25:21.230284 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:21.244918 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:37354.service: Deactivated successfully.
Dec 16 12:25:21.248332 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 12:25:21.258615 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit.
Dec 16 12:25:21.260078 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:37366.service - OpenSSH per-connection server daemon (10.0.0.1:37366).
Dec 16 12:25:21.271642 systemd-logind[1519]: Removed session 16.
Dec 16 12:25:21.324223 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 37366 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:21.325178 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:21.331378 systemd-logind[1519]: New session 17 of user core.
Dec 16 12:25:21.341442 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 12:25:21.568169 sshd[4214]: Connection closed by 10.0.0.1 port 37366
Dec 16 12:25:21.568847 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:21.578660 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:37366.service: Deactivated successfully.
Dec 16 12:25:21.580997 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 12:25:21.581858 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit.
Dec 16 12:25:21.585007 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:37370.service - OpenSSH per-connection server daemon (10.0.0.1:37370).
Dec 16 12:25:21.586083 systemd-logind[1519]: Removed session 17.
Dec 16 12:25:21.654128 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 37370 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:21.655564 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:21.662153 systemd-logind[1519]: New session 18 of user core.
Dec 16 12:25:21.672286 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 12:25:22.314465 sshd[4229]: Connection closed by 10.0.0.1 port 37370
Dec 16 12:25:22.314934 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:22.333408 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:37370.service: Deactivated successfully.
Dec 16 12:25:22.338363 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 12:25:22.340206 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit.
Dec 16 12:25:22.343424 systemd-logind[1519]: Removed session 18.
Dec 16 12:25:22.345729 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:37372.service - OpenSSH per-connection server daemon (10.0.0.1:37372).
Dec 16 12:25:22.401474 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 37372 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:22.402998 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:22.407792 systemd-logind[1519]: New session 19 of user core.
Dec 16 12:25:22.424412 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 12:25:22.695087 sshd[4249]: Connection closed by 10.0.0.1 port 37372
Dec 16 12:25:22.695741 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:22.708309 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:37372.service: Deactivated successfully.
Dec 16 12:25:22.711689 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 12:25:22.714330 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit.
Dec 16 12:25:22.723615 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:37376.service - OpenSSH per-connection server daemon (10.0.0.1:37376).
Dec 16 12:25:22.724439 systemd-logind[1519]: Removed session 19.
Dec 16 12:25:22.789995 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 37376 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:22.792411 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:22.797810 systemd-logind[1519]: New session 20 of user core.
Dec 16 12:25:22.813281 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 12:25:22.935518 sshd[4263]: Connection closed by 10.0.0.1 port 37376
Dec 16 12:25:22.935862 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:22.940230 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:37376.service: Deactivated successfully.
Dec 16 12:25:22.942843 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 12:25:22.943956 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit.
Dec 16 12:25:22.945849 systemd-logind[1519]: Removed session 20.
Dec 16 12:25:27.949447 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378).
Dec 16 12:25:28.017252 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:28.018743 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:28.026425 systemd-logind[1519]: New session 21 of user core.
Dec 16 12:25:28.046286 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 12:25:28.164099 sshd[4287]: Connection closed by 10.0.0.1 port 37378
Dec 16 12:25:28.164334 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:28.168350 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:37378.service: Deactivated successfully.
Dec 16 12:25:28.170236 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 12:25:28.171448 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit.
Dec 16 12:25:28.172404 systemd-logind[1519]: Removed session 21.
Dec 16 12:25:33.184737 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:58236.service - OpenSSH per-connection server daemon (10.0.0.1:58236).
Dec 16 12:25:33.255401 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 58236 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:33.257797 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:33.263261 systemd-logind[1519]: New session 22 of user core.
Dec 16 12:25:33.273400 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 12:25:33.422642 sshd[4303]: Connection closed by 10.0.0.1 port 58236
Dec 16 12:25:33.423504 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:33.428752 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:58236.service: Deactivated successfully.
Dec 16 12:25:33.433064 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 12:25:33.435190 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit.
Dec 16 12:25:33.436583 systemd-logind[1519]: Removed session 22.
Dec 16 12:25:38.437483 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246).
Dec 16 12:25:38.525776 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:38.528799 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:38.535986 systemd-logind[1519]: New session 23 of user core.
Dec 16 12:25:38.556363 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 12:25:38.686614 sshd[4320]: Connection closed by 10.0.0.1 port 58246
Dec 16 12:25:38.687007 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:38.694876 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:58246.service: Deactivated successfully.
Dec 16 12:25:38.696800 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 12:25:38.698700 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit.
Dec 16 12:25:38.701804 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:58262.service - OpenSSH per-connection server daemon (10.0.0.1:58262).
Dec 16 12:25:38.702984 systemd-logind[1519]: Removed session 23.
Dec 16 12:25:38.769573 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 58262 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:38.771017 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:38.777396 systemd-logind[1519]: New session 24 of user core.
Dec 16 12:25:38.787360 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 12:25:41.131277 kubelet[2697]: I1216 12:25:41.131195 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kmnn6" podStartSLOduration=74.131174888 podStartE2EDuration="1m14.131174888s" podCreationTimestamp="2025-12-16 12:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:24:50.606430185 +0000 UTC m=+31.332876444" watchObservedRunningTime="2025-12-16 12:25:41.131174888 +0000 UTC m=+81.857621187"
Dec 16 12:25:41.157317 containerd[1545]: time="2025-12-16T12:25:41.156762868Z" level=info msg="StopContainer for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" with timeout 30 (s)"
Dec 16 12:25:41.161078 containerd[1545]: time="2025-12-16T12:25:41.160218607Z" level=info msg="Stop container \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" with signal terminated"
Dec 16 12:25:41.178599 systemd[1]: cri-containerd-3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742.scope: Deactivated successfully.
Dec 16 12:25:41.184395 containerd[1545]: time="2025-12-16T12:25:41.184272298Z" level=info msg="received container exit event container_id:\"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" id:\"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" pid:3260 exited_at:{seconds:1765887941 nanos:183990177}"
Dec 16 12:25:41.193747 containerd[1545]: time="2025-12-16T12:25:41.193681190Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:25:41.221472 containerd[1545]: time="2025-12-16T12:25:41.221232860Z" level=info msg="StopContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" with timeout 2 (s)"
Dec 16 12:25:41.223060 containerd[1545]: time="2025-12-16T12:25:41.222945909Z" level=info msg="Stop container \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" with signal terminated"
Dec 16 12:25:41.228400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742-rootfs.mount: Deactivated successfully.
Dec 16 12:25:41.235199 systemd-networkd[1440]: lxc_health: Link DOWN
Dec 16 12:25:41.235209 systemd-networkd[1440]: lxc_health: Lost carrier
Dec 16 12:25:41.246070 containerd[1545]: time="2025-12-16T12:25:41.245242591Z" level=info msg="StopContainer for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" returns successfully"
Dec 16 12:25:41.250669 systemd[1]: cri-containerd-3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f.scope: Deactivated successfully.
Dec 16 12:25:41.251059 systemd[1]: cri-containerd-3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f.scope: Consumed 7.175s CPU time, 123M memory peak, 136K read from disk, 12.9M written to disk.
Dec 16 12:25:41.251839 containerd[1545]: time="2025-12-16T12:25:41.251778147Z" level=info msg="StopPodSandbox for \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\""
Dec 16 12:25:41.252113 containerd[1545]: time="2025-12-16T12:25:41.252051028Z" level=info msg="Container to stop \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.253135 containerd[1545]: time="2025-12-16T12:25:41.252690152Z" level=info msg="received container exit event container_id:\"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" id:\"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" pid:3370 exited_at:{seconds:1765887941 nanos:252194749}"
Dec 16 12:25:41.270239 systemd[1]: cri-containerd-787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6.scope: Deactivated successfully.
Dec 16 12:25:41.279222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f-rootfs.mount: Deactivated successfully.
Dec 16 12:25:41.279633 containerd[1545]: time="2025-12-16T12:25:41.279591219Z" level=info msg="received sandbox exit event container_id:\"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" id:\"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" exit_status:137 exited_at:{seconds:1765887941 nanos:279356458}" monitor_name=podsandbox
Dec 16 12:25:41.294473 containerd[1545]: time="2025-12-16T12:25:41.294345980Z" level=info msg="StopContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" returns successfully"
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295181064Z" level=info msg="StopPodSandbox for \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\""
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295288225Z" level=info msg="Container to stop \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295302385Z" level=info msg="Container to stop \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295311625Z" level=info msg="Container to stop \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295323625Z" level=info msg="Container to stop \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.295352 containerd[1545]: time="2025-12-16T12:25:41.295333625Z" level=info msg="Container to stop \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 12:25:41.305590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6-rootfs.mount: Deactivated successfully.
Dec 16 12:25:41.306457 systemd[1]: cri-containerd-cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312.scope: Deactivated successfully.
Dec 16 12:25:41.307889 containerd[1545]: time="2025-12-16T12:25:41.307848013Z" level=info msg="received sandbox exit event container_id:\"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" id:\"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" exit_status:137 exited_at:{seconds:1765887941 nanos:306689527}" monitor_name=podsandbox
Dec 16 12:25:41.327171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312-rootfs.mount: Deactivated successfully.
Dec 16 12:25:41.336885 containerd[1545]: time="2025-12-16T12:25:41.336824052Z" level=info msg="shim disconnected" id=787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6 namespace=k8s.io
Dec 16 12:25:41.351096 containerd[1545]: time="2025-12-16T12:25:41.336857172Z" level=warning msg="cleaning up after shim disconnected" id=787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6 namespace=k8s.io
Dec 16 12:25:41.351239 containerd[1545]: time="2025-12-16T12:25:41.351102610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 12:25:41.351239 containerd[1545]: time="2025-12-16T12:25:41.337533736Z" level=info msg="shim disconnected" id=cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312 namespace=k8s.io
Dec 16 12:25:41.351285 containerd[1545]: time="2025-12-16T12:25:41.351220450Z" level=warning msg="cleaning up after shim disconnected" id=cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312 namespace=k8s.io
Dec 16 12:25:41.351285 containerd[1545]: time="2025-12-16T12:25:41.351248731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 12:25:41.368921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6-shm.mount: Deactivated successfully.
Dec 16 12:25:41.369053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312-shm.mount: Deactivated successfully.
Dec 16 12:25:41.369229 containerd[1545]: time="2025-12-16T12:25:41.368951267Z" level=info msg="TearDown network for sandbox \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" successfully"
Dec 16 12:25:41.369229 containerd[1545]: time="2025-12-16T12:25:41.368984427Z" level=info msg="StopPodSandbox for \"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" returns successfully"
Dec 16 12:25:41.369290 containerd[1545]: time="2025-12-16T12:25:41.369264109Z" level=info msg="TearDown network for sandbox \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" successfully"
Dec 16 12:25:41.369312 containerd[1545]: time="2025-12-16T12:25:41.369289549Z" level=info msg="StopPodSandbox for \"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" returns successfully"
Dec 16 12:25:41.376187 containerd[1545]: time="2025-12-16T12:25:41.376139787Z" level=info msg="received sandbox container exit event sandbox_id:\"cde25f0f73cccb53aadaf9bf14eeebd4f80ce7cb6b857398185af594206d1312\" exit_status:137 exited_at:{seconds:1765887941 nanos:306689527}" monitor_name=criService
Dec 16 12:25:41.377737 containerd[1545]: time="2025-12-16T12:25:41.377704875Z" level=info msg="received sandbox container exit event sandbox_id:\"787a7489d25c4ff9bdffda824b2367a8381aa6029e7ebc22091f2b8f1f97b4c6\" exit_status:137 exited_at:{seconds:1765887941 nanos:279356458}" monitor_name=criService
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444008 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-hubble-tls\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444349 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-bpf-maps\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444369 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-lib-modules\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444390 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd123cd4-53e4-479e-a35b-b4335c79f686-clustermesh-secrets\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444410 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750e6e3b-7377-448e-a5a9-554b82c83939-cilium-config-path\") pod \"750e6e3b-7377-448e-a5a9-554b82c83939\" (UID: \"750e6e3b-7377-448e-a5a9-554b82c83939\") "
Dec 16 12:25:41.444675 kubelet[2697]: I1216 12:25:41.444428 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-kernel\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444445 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-etc-cni-netd\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444461 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t48mx\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-kube-api-access-t48mx\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444481 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-cgroup\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444497 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2mmx\" (UniqueName: \"kubernetes.io/projected/750e6e3b-7377-448e-a5a9-554b82c83939-kube-api-access-d2mmx\") pod \"750e6e3b-7377-448e-a5a9-554b82c83939\" (UID: \"750e6e3b-7377-448e-a5a9-554b82c83939\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444516 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-net\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.444934 kubelet[2697]: I1216 12:25:41.444531 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-config-path\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.445070 kubelet[2697]: I1216 12:25:41.444576 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-xtables-lock\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.445070 kubelet[2697]: I1216 12:25:41.444592 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-run\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.445070 kubelet[2697]: I1216 12:25:41.444608 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-hostproc\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.445070 kubelet[2697]: I1216 12:25:41.444620 2697 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cni-path\") pod \"dd123cd4-53e4-479e-a35b-b4335c79f686\" (UID: \"dd123cd4-53e4-479e-a35b-b4335c79f686\") "
Dec 16 12:25:41.447228 kubelet[2697]: I1216 12:25:41.447193 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.447477 kubelet[2697]: I1216 12:25:41.447434 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.447550 kubelet[2697]: I1216 12:25:41.447493 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.448194 kubelet[2697]: I1216 12:25:41.448163 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cni-path" (OuterVolumeSpecName: "cni-path") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.448320 kubelet[2697]: I1216 12:25:41.448305 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.448443 kubelet[2697]: I1216 12:25:41.448430 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-hostproc" (OuterVolumeSpecName: "hostproc") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.448529 kubelet[2697]: I1216 12:25:41.448517 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.449494 kubelet[2697]: I1216 12:25:41.449438 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 12:25:41.449567 kubelet[2697]: I1216 12:25:41.449542 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:25:41.449594 kubelet[2697]: I1216 12:25:41.449566 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:25:41.449640 kubelet[2697]: I1216 12:25:41.449619 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:25:41.450036 kubelet[2697]: I1216 12:25:41.449997 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd123cd4-53e4-479e-a35b-b4335c79f686-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:25:41.451190 kubelet[2697]: I1216 12:25:41.451153 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:25:41.451291 kubelet[2697]: I1216 12:25:41.451268 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750e6e3b-7377-448e-a5a9-554b82c83939-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "750e6e3b-7377-448e-a5a9-554b82c83939" (UID: "750e6e3b-7377-448e-a5a9-554b82c83939"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:25:41.451714 kubelet[2697]: I1216 12:25:41.451682 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750e6e3b-7377-448e-a5a9-554b82c83939-kube-api-access-d2mmx" (OuterVolumeSpecName: "kube-api-access-d2mmx") pod "750e6e3b-7377-448e-a5a9-554b82c83939" (UID: "750e6e3b-7377-448e-a5a9-554b82c83939"). InnerVolumeSpecName "kube-api-access-d2mmx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:25:41.452384 kubelet[2697]: I1216 12:25:41.452348 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-kube-api-access-t48mx" (OuterVolumeSpecName: "kube-api-access-t48mx") pod "dd123cd4-53e4-479e-a35b-b4335c79f686" (UID: "dd123cd4-53e4-479e-a35b-b4335c79f686"). InnerVolumeSpecName "kube-api-access-t48mx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:25:41.544887 kubelet[2697]: I1216 12:25:41.544836 2697 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.544887 kubelet[2697]: I1216 12:25:41.544873 2697 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.544887 kubelet[2697]: I1216 12:25:41.544881 2697 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.544887 kubelet[2697]: I1216 12:25:41.544892 2697 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd123cd4-53e4-479e-a35b-b4335c79f686-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.544887 kubelet[2697]: I1216 12:25:41.544905 2697 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750e6e3b-7377-448e-a5a9-554b82c83939-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544912 2697 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544921 2697 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 
12:25:41.544931 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t48mx\" (UniqueName: \"kubernetes.io/projected/dd123cd4-53e4-479e-a35b-b4335c79f686-kube-api-access-t48mx\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544939 2697 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544946 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d2mmx\" (UniqueName: \"kubernetes.io/projected/750e6e3b-7377-448e-a5a9-554b82c83939-kube-api-access-d2mmx\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544953 2697 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544962 2697 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545173 kubelet[2697]: I1216 12:25:41.544970 2697 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545328 kubelet[2697]: I1216 12:25:41.544977 2697 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545328 kubelet[2697]: I1216 12:25:41.544985 2697 reconciler_common.go:299] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.545328 kubelet[2697]: I1216 12:25:41.544993 2697 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd123cd4-53e4-479e-a35b-b4335c79f686-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:25:41.685047 kubelet[2697]: I1216 12:25:41.684365 2697 scope.go:117] "RemoveContainer" containerID="3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f" Dec 16 12:25:41.687245 containerd[1545]: time="2025-12-16T12:25:41.687200126Z" level=info msg="RemoveContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\"" Dec 16 12:25:41.688112 systemd[1]: Removed slice kubepods-burstable-poddd123cd4_53e4_479e_a35b_b4335c79f686.slice - libcontainer container kubepods-burstable-poddd123cd4_53e4_479e_a35b_b4335c79f686.slice. Dec 16 12:25:41.688240 systemd[1]: kubepods-burstable-poddd123cd4_53e4_479e_a35b_b4335c79f686.slice: Consumed 7.283s CPU time, 123.3M memory peak, 144K read from disk, 12.9M written to disk. Dec 16 12:25:41.691529 systemd[1]: Removed slice kubepods-besteffort-pod750e6e3b_7377_448e_a5a9_554b82c83939.slice - libcontainer container kubepods-besteffort-pod750e6e3b_7377_448e_a5a9_554b82c83939.slice. 
Dec 16 12:25:41.696191 containerd[1545]: time="2025-12-16T12:25:41.695419411Z" level=info msg="RemoveContainer for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" returns successfully" Dec 16 12:25:41.697504 kubelet[2697]: I1216 12:25:41.696555 2697 scope.go:117] "RemoveContainer" containerID="496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea" Dec 16 12:25:41.700848 containerd[1545]: time="2025-12-16T12:25:41.700799081Z" level=info msg="RemoveContainer for \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\"" Dec 16 12:25:41.710892 containerd[1545]: time="2025-12-16T12:25:41.710823855Z" level=info msg="RemoveContainer for \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" returns successfully" Dec 16 12:25:41.713010 kubelet[2697]: I1216 12:25:41.711391 2697 scope.go:117] "RemoveContainer" containerID="06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329" Dec 16 12:25:41.716554 containerd[1545]: time="2025-12-16T12:25:41.716514366Z" level=info msg="RemoveContainer for \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\"" Dec 16 12:25:41.723357 containerd[1545]: time="2025-12-16T12:25:41.723293603Z" level=info msg="RemoveContainer for \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" returns successfully" Dec 16 12:25:41.723603 kubelet[2697]: I1216 12:25:41.723556 2697 scope.go:117] "RemoveContainer" containerID="a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29" Dec 16 12:25:41.726465 containerd[1545]: time="2025-12-16T12:25:41.726422141Z" level=info msg="RemoveContainer for \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\"" Dec 16 12:25:41.732542 containerd[1545]: time="2025-12-16T12:25:41.732479294Z" level=info msg="RemoveContainer for \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" returns successfully" Dec 16 12:25:41.732739 kubelet[2697]: I1216 12:25:41.732709 2697 scope.go:117] 
"RemoveContainer" containerID="5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90" Dec 16 12:25:41.734663 containerd[1545]: time="2025-12-16T12:25:41.734603625Z" level=info msg="RemoveContainer for \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\"" Dec 16 12:25:41.738508 containerd[1545]: time="2025-12-16T12:25:41.738449206Z" level=info msg="RemoveContainer for \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" returns successfully" Dec 16 12:25:41.738768 kubelet[2697]: I1216 12:25:41.738720 2697 scope.go:117] "RemoveContainer" containerID="3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f" Dec 16 12:25:41.739079 containerd[1545]: time="2025-12-16T12:25:41.739036729Z" level=error msg="ContainerStatus for \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\": not found" Dec 16 12:25:41.739363 kubelet[2697]: E1216 12:25:41.739320 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\": not found" containerID="3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f" Dec 16 12:25:41.739409 kubelet[2697]: I1216 12:25:41.739352 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f"} err="failed to get container status \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3165dac90f150f794579cef7041516a0bc53567cde20e4a5b4543a49d6073a8f\": not found" Dec 16 12:25:41.739409 kubelet[2697]: I1216 12:25:41.739391 2697 scope.go:117] "RemoveContainer" 
containerID="496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea" Dec 16 12:25:41.739579 containerd[1545]: time="2025-12-16T12:25:41.739549532Z" level=error msg="ContainerStatus for \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\": not found" Dec 16 12:25:41.739671 kubelet[2697]: E1216 12:25:41.739654 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\": not found" containerID="496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea" Dec 16 12:25:41.739701 kubelet[2697]: I1216 12:25:41.739674 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea"} err="failed to get container status \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"496a71303a631196945fbca4e5e84c1aa060aba7f71f0e48fa66c3ac7ab644ea\": not found" Dec 16 12:25:41.739701 kubelet[2697]: I1216 12:25:41.739688 2697 scope.go:117] "RemoveContainer" containerID="06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329" Dec 16 12:25:41.739932 containerd[1545]: time="2025-12-16T12:25:41.739899174Z" level=error msg="ContainerStatus for \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\": not found" Dec 16 12:25:41.740054 kubelet[2697]: E1216 12:25:41.740020 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\": not found" containerID="06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329" Dec 16 12:25:41.740106 kubelet[2697]: I1216 12:25:41.740058 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329"} err="failed to get container status \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\": rpc error: code = NotFound desc = an error occurred when try to find container \"06db3955cc458139c83943a6c833dff257371463287fa16551554ceb61b14329\": not found" Dec 16 12:25:41.740106 kubelet[2697]: I1216 12:25:41.740073 2697 scope.go:117] "RemoveContainer" containerID="a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29" Dec 16 12:25:41.740245 containerd[1545]: time="2025-12-16T12:25:41.740220736Z" level=error msg="ContainerStatus for \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\": not found" Dec 16 12:25:41.740446 kubelet[2697]: E1216 12:25:41.740370 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\": not found" containerID="a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29" Dec 16 12:25:41.740513 kubelet[2697]: I1216 12:25:41.740447 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29"} err="failed to get container status \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"a212b18637589c1f13566271065a938c8e5b1650542e05ca8d1715e88b0dea29\": not found" Dec 16 12:25:41.740513 kubelet[2697]: I1216 12:25:41.740467 2697 scope.go:117] "RemoveContainer" containerID="5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90" Dec 16 12:25:41.740742 containerd[1545]: time="2025-12-16T12:25:41.740693459Z" level=error msg="ContainerStatus for \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\": not found" Dec 16 12:25:41.740829 kubelet[2697]: E1216 12:25:41.740807 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\": not found" containerID="5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90" Dec 16 12:25:41.740862 kubelet[2697]: I1216 12:25:41.740832 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90"} err="failed to get container status \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d167fcfdde821de2c91bf66bb68548e7063c2a24aa7694e8c87a130f516ed90\": not found" Dec 16 12:25:41.740862 kubelet[2697]: I1216 12:25:41.740850 2697 scope.go:117] "RemoveContainer" containerID="3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742" Dec 16 12:25:41.743541 containerd[1545]: time="2025-12-16T12:25:41.743506954Z" level=info msg="RemoveContainer for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\"" Dec 16 12:25:41.748759 containerd[1545]: time="2025-12-16T12:25:41.748714622Z" level=info msg="RemoveContainer for 
\"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" returns successfully" Dec 16 12:25:41.749084 kubelet[2697]: I1216 12:25:41.749057 2697 scope.go:117] "RemoveContainer" containerID="3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742" Dec 16 12:25:41.749562 containerd[1545]: time="2025-12-16T12:25:41.749479827Z" level=error msg="ContainerStatus for \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\": not found" Dec 16 12:25:41.749650 kubelet[2697]: E1216 12:25:41.749629 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\": not found" containerID="3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742" Dec 16 12:25:41.749685 kubelet[2697]: I1216 12:25:41.749658 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742"} err="failed to get container status \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\": rpc error: code = NotFound desc = an error occurred when try to find container \"3709536f8add8b5b5c9c11a455089c093d3adeca4dc9bdc61dda059c9f1b4742\": not found" Dec 16 12:25:42.228657 systemd[1]: var-lib-kubelet-pods-750e6e3b\x2d7377\x2d448e\x2da5a9\x2d554b82c83939-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd2mmx.mount: Deactivated successfully. Dec 16 12:25:42.228755 systemd[1]: var-lib-kubelet-pods-dd123cd4\x2d53e4\x2d479e\x2da35b\x2db4335c79f686-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt48mx.mount: Deactivated successfully. 
Dec 16 12:25:42.228813 systemd[1]: var-lib-kubelet-pods-dd123cd4\x2d53e4\x2d479e\x2da35b\x2db4335c79f686-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 12:25:42.228880 systemd[1]: var-lib-kubelet-pods-dd123cd4\x2d53e4\x2d479e\x2da35b\x2db4335c79f686-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 12:25:43.036195 sshd[4336]: Connection closed by 10.0.0.1 port 58262 Dec 16 12:25:43.036398 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:43.049475 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:58262.service: Deactivated successfully. Dec 16 12:25:43.051603 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:25:43.051963 systemd[1]: session-24.scope: Consumed 1.610s CPU time, 25.1M memory peak. Dec 16 12:25:43.053295 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:25:43.056610 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:50424.service - OpenSSH per-connection server daemon (10.0.0.1:50424). Dec 16 12:25:43.057744 systemd-logind[1519]: Removed session 24. Dec 16 12:25:43.124515 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 50424 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:25:43.126644 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:43.131069 systemd-logind[1519]: New session 25 of user core. Dec 16 12:25:43.144937 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:25:43.384380 kubelet[2697]: I1216 12:25:43.383957 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750e6e3b-7377-448e-a5a9-554b82c83939" path="/var/lib/kubelet/pods/750e6e3b-7377-448e-a5a9-554b82c83939/volumes" Dec 16 12:25:43.386540 kubelet[2697]: I1216 12:25:43.386068 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd123cd4-53e4-479e-a35b-b4335c79f686" path="/var/lib/kubelet/pods/dd123cd4-53e4-479e-a35b-b4335c79f686/volumes" Dec 16 12:25:44.447490 kubelet[2697]: E1216 12:25:44.447433 2697 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:25:44.834250 sshd[4484]: Connection closed by 10.0.0.1 port 50424 Dec 16 12:25:44.835340 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.845940 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:50424.service: Deactivated successfully. Dec 16 12:25:44.849067 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:25:44.849431 systemd[1]: session-25.scope: Consumed 1.466s CPU time, 26.5M memory peak. Dec 16 12:25:44.854599 systemd-logind[1519]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:25:44.859728 systemd[1]: Started sshd@25-10.0.0.37:22-10.0.0.1:50426.service - OpenSSH per-connection server daemon (10.0.0.1:50426). Dec 16 12:25:44.861180 systemd-logind[1519]: Removed session 25. Dec 16 12:25:44.896047 systemd[1]: Created slice kubepods-burstable-pod76eb51c1_0517_45fa_8210_a9088cdcf758.slice - libcontainer container kubepods-burstable-pod76eb51c1_0517_45fa_8210_a9088cdcf758.slice. 
Dec 16 12:25:44.934805 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 50426 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:25:44.936383 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.940813 systemd-logind[1519]: New session 26 of user core. Dec 16 12:25:44.952316 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:25:44.964288 kubelet[2697]: I1216 12:25:44.964235 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-hostproc\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn" Dec 16 12:25:44.964288 kubelet[2697]: I1216 12:25:44.964281 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-etc-cni-netd\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn" Dec 16 12:25:44.964288 kubelet[2697]: I1216 12:25:44.964299 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-host-proc-sys-kernel\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn" Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964315 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76eb51c1-0517-45fa-8210-a9088cdcf758-hubble-tls\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn" Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964376 2697 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-cilium-cgroup\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964420 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-cni-path\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964444 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76eb51c1-0517-45fa-8210-a9088cdcf758-clustermesh-secrets\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964473 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-bpf-maps\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964704 kubelet[2697]: I1216 12:25:44.964571 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/76eb51c1-0517-45fa-8210-a9088cdcf758-cilium-ipsec-secrets\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964589 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-lib-modules\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964607 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76eb51c1-0517-45fa-8210-a9088cdcf758-cilium-config-path\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964659 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdk7m\" (UniqueName: \"kubernetes.io/projected/76eb51c1-0517-45fa-8210-a9088cdcf758-kube-api-access-tdk7m\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964697 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-cilium-run\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964716 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-xtables-lock\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:44.964840 kubelet[2697]: I1216 12:25:44.964731 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76eb51c1-0517-45fa-8210-a9088cdcf758-host-proc-sys-net\") pod \"cilium-twtsn\" (UID: \"76eb51c1-0517-45fa-8210-a9088cdcf758\") " pod="kube-system/cilium-twtsn"
Dec 16 12:25:45.001835 sshd[4499]: Connection closed by 10.0.0.1 port 50426
Dec 16 12:25:45.002371 sshd-session[4496]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:45.018734 systemd[1]: sshd@25-10.0.0.37:22-10.0.0.1:50426.service: Deactivated successfully.
Dec 16 12:25:45.020757 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 12:25:45.021720 systemd-logind[1519]: Session 26 logged out. Waiting for processes to exit.
Dec 16 12:25:45.024738 systemd[1]: Started sshd@26-10.0.0.37:22-10.0.0.1:50436.service - OpenSSH per-connection server daemon (10.0.0.1:50436).
Dec 16 12:25:45.025332 systemd-logind[1519]: Removed session 26.
Dec 16 12:25:45.092516 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 50436 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:25:45.094235 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:25:45.098434 systemd-logind[1519]: New session 27 of user core.
Dec 16 12:25:45.114266 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 12:25:45.205011 containerd[1545]: time="2025-12-16T12:25:45.204967252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-twtsn,Uid:76eb51c1-0517-45fa-8210-a9088cdcf758,Namespace:kube-system,Attempt:0,}"
Dec 16 12:25:45.232629 containerd[1545]: time="2025-12-16T12:25:45.232572348Z" level=info msg="connecting to shim 4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:25:45.263289 systemd[1]: Started cri-containerd-4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17.scope - libcontainer container 4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17.
Dec 16 12:25:45.288685 containerd[1545]: time="2025-12-16T12:25:45.288633265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-twtsn,Uid:76eb51c1-0517-45fa-8210-a9088cdcf758,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\""
Dec 16 12:25:45.295995 containerd[1545]: time="2025-12-16T12:25:45.295929901Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:25:45.306512 containerd[1545]: time="2025-12-16T12:25:45.306450912Z" level=info msg="Container 1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:25:45.315453 containerd[1545]: time="2025-12-16T12:25:45.315406436Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3\""
Dec 16 12:25:45.316245 containerd[1545]: time="2025-12-16T12:25:45.315962199Z" level=info msg="StartContainer for \"1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3\""
Dec 16 12:25:45.317285 containerd[1545]: time="2025-12-16T12:25:45.317251846Z" level=info msg="connecting to shim 1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" protocol=ttrpc version=3
Dec 16 12:25:45.333282 systemd[1]: Started cri-containerd-1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3.scope - libcontainer container 1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3.
Dec 16 12:25:45.362528 containerd[1545]: time="2025-12-16T12:25:45.362402468Z" level=info msg="StartContainer for \"1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3\" returns successfully"
Dec 16 12:25:45.373674 systemd[1]: cri-containerd-1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3.scope: Deactivated successfully.
Dec 16 12:25:45.375307 containerd[1545]: time="2025-12-16T12:25:45.375264211Z" level=info msg="received container exit event container_id:\"1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3\" id:\"1aa1008250fa79e91bc169db8a7bdfec6ce10717239bb67ce630384a9af920d3\" pid:4580 exited_at:{seconds:1765887945 nanos:374451807}"
Dec 16 12:25:45.702781 containerd[1545]: time="2025-12-16T12:25:45.702722665Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:25:45.710359 containerd[1545]: time="2025-12-16T12:25:45.710306462Z" level=info msg="Container d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:25:45.719337 containerd[1545]: time="2025-12-16T12:25:45.718344942Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2\""
Dec 16 12:25:45.719924 containerd[1545]: time="2025-12-16T12:25:45.719898590Z" level=info msg="StartContainer for \"d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2\""
Dec 16 12:25:45.721127 containerd[1545]: time="2025-12-16T12:25:45.721043595Z" level=info msg="connecting to shim d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" protocol=ttrpc version=3
Dec 16 12:25:45.742280 systemd[1]: Started cri-containerd-d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2.scope - libcontainer container d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2.
Dec 16 12:25:45.773621 containerd[1545]: time="2025-12-16T12:25:45.773488774Z" level=info msg="StartContainer for \"d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2\" returns successfully"
Dec 16 12:25:45.781813 systemd[1]: cri-containerd-d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2.scope: Deactivated successfully.
Dec 16 12:25:45.783186 containerd[1545]: time="2025-12-16T12:25:45.782845180Z" level=info msg="received container exit event container_id:\"d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2\" id:\"d2a27d0eb350fc3e1cf8aeddb7ea6beb8ac0a4a5b368e27b29c01768c6e1f2e2\" pid:4625 exited_at:{seconds:1765887945 nanos:782270657}"
Dec 16 12:25:46.712756 containerd[1545]: time="2025-12-16T12:25:46.712360832Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:25:46.734413 containerd[1545]: time="2025-12-16T12:25:46.733218732Z" level=info msg="Container 455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:25:46.749724 containerd[1545]: time="2025-12-16T12:25:46.749645371Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a\""
Dec 16 12:25:46.751928 containerd[1545]: time="2025-12-16T12:25:46.750527175Z" level=info msg="StartContainer for \"455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a\""
Dec 16 12:25:46.753589 containerd[1545]: time="2025-12-16T12:25:46.753555470Z" level=info msg="connecting to shim 455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" protocol=ttrpc version=3
Dec 16 12:25:46.785280 systemd[1]: Started cri-containerd-455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a.scope - libcontainer container 455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a.
Dec 16 12:25:46.882175 containerd[1545]: time="2025-12-16T12:25:46.882110727Z" level=info msg="StartContainer for \"455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a\" returns successfully"
Dec 16 12:25:46.883723 systemd[1]: cri-containerd-455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a.scope: Deactivated successfully.
Dec 16 12:25:46.887762 containerd[1545]: time="2025-12-16T12:25:46.887676194Z" level=info msg="received container exit event container_id:\"455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a\" id:\"455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a\" pid:4668 exited_at:{seconds:1765887946 nanos:886829950}"
Dec 16 12:25:46.911350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-455b84d6f74711d7f113781edb7439db7b6dee833a1dbec0a6c7d5fa69f5412a-rootfs.mount: Deactivated successfully.
Dec 16 12:25:47.723343 containerd[1545]: time="2025-12-16T12:25:47.723285241Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:25:47.746254 containerd[1545]: time="2025-12-16T12:25:47.741500007Z" level=info msg="Container 061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:25:47.755346 containerd[1545]: time="2025-12-16T12:25:47.755273671Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4\""
Dec 16 12:25:47.756694 containerd[1545]: time="2025-12-16T12:25:47.756076755Z" level=info msg="StartContainer for \"061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4\""
Dec 16 12:25:47.757712 containerd[1545]: time="2025-12-16T12:25:47.757673323Z" level=info msg="connecting to shim 061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" protocol=ttrpc version=3
Dec 16 12:25:47.785416 systemd[1]: Started cri-containerd-061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4.scope - libcontainer container 061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4.
Dec 16 12:25:47.830963 systemd[1]: cri-containerd-061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4.scope: Deactivated successfully.
Dec 16 12:25:47.831444 systemd[1]: cri-containerd-061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4.scope: Consumed 22ms CPU time, 7.8M memory peak, 6.4M read from disk.
Dec 16 12:25:47.832883 containerd[1545]: time="2025-12-16T12:25:47.832832715Z" level=info msg="received container exit event container_id:\"061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4\" id:\"061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4\" pid:4708 exited_at:{seconds:1765887947 nanos:831464708}"
Dec 16 12:25:47.843523 containerd[1545]: time="2025-12-16T12:25:47.843450004Z" level=info msg="StartContainer for \"061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4\" returns successfully"
Dec 16 12:25:47.860325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061ff68b42787cbefc17115b271613a903d413a8046db15ddd7f4c36232d7bc4-rootfs.mount: Deactivated successfully.
Dec 16 12:25:48.833752 containerd[1545]: time="2025-12-16T12:25:48.833694026Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:25:48.951789 containerd[1545]: time="2025-12-16T12:25:48.950928521Z" level=info msg="Container 89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:25:48.975543 containerd[1545]: time="2025-12-16T12:25:48.975442793Z" level=info msg="CreateContainer within sandbox \"4b2c9bb207ccd246065e21687d2cd68d598ff45b38a9eb46288111bddfd49b17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a\""
Dec 16 12:25:48.976316 containerd[1545]: time="2025-12-16T12:25:48.976276717Z" level=info msg="StartContainer for \"89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a\""
Dec 16 12:25:48.977421 containerd[1545]: time="2025-12-16T12:25:48.977383402Z" level=info msg="connecting to shim 89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a" address="unix:///run/containerd/s/9a4284b1c5292d2224ac34134e3feaa70eca14e9275d36ad1a2450358d7177c2" protocol=ttrpc version=3
Dec 16 12:25:49.011304 systemd[1]: Started cri-containerd-89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a.scope - libcontainer container 89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a.
Dec 16 12:25:49.091301 containerd[1545]: time="2025-12-16T12:25:49.091152111Z" level=info msg="StartContainer for \"89d49dbdf5afb236a6e77b3898e70699200035dc262b607ed780fb82cb34cb3a\" returns successfully"
Dec 16 12:25:49.424049 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 16 12:25:49.747129 kubelet[2697]: I1216 12:25:49.746913 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-twtsn" podStartSLOduration=5.746890753 podStartE2EDuration="5.746890753s" podCreationTimestamp="2025-12-16 12:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:25:49.745653267 +0000 UTC m=+90.472099526" watchObservedRunningTime="2025-12-16 12:25:49.746890753 +0000 UTC m=+90.473337012"
Dec 16 12:25:52.740215 systemd-networkd[1440]: lxc_health: Link UP
Dec 16 12:25:52.740436 systemd-networkd[1440]: lxc_health: Gained carrier
Dec 16 12:25:54.558284 systemd-networkd[1440]: lxc_health: Gained IPv6LL
Dec 16 12:25:58.036928 sshd[4513]: Connection closed by 10.0.0.1 port 50436
Dec 16 12:25:58.037549 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Dec 16 12:25:58.041602 systemd[1]: sshd@26-10.0.0.37:22-10.0.0.1:50436.service: Deactivated successfully.
Dec 16 12:25:58.043583 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 12:25:58.044614 systemd-logind[1519]: Session 27 logged out. Waiting for processes to exit.
Dec 16 12:25:58.047722 systemd-logind[1519]: Removed session 27.