Dec 16 12:17:23.791611 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:17:23.791633 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:17:23.791643 kernel: KASLR enabled
Dec 16 12:17:23.791649 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:17:23.791655 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 16 12:17:23.791660 kernel: random: crng init done
Dec 16 12:17:23.791667 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 16 12:17:23.791673 kernel: secureboot: Secure boot enabled
Dec 16 12:17:23.791679 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:17:23.791686 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 16 12:17:23.791692 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:17:23.791698 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791704 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791710 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791717 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791724 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791730 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791737 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791743 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791755 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:17:23.791762 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:17:23.791768 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:17:23.791774 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:17:23.791780 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 16 12:17:23.791785 kernel: Zone ranges:
Dec 16 12:17:23.791793 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:17:23.791799 kernel: DMA32 empty
Dec 16 12:17:23.791804 kernel: Normal empty
Dec 16 12:17:23.791810 kernel: Device empty
Dec 16 12:17:23.791816 kernel: Movable zone start for each node
Dec 16 12:17:23.791822 kernel: Early memory node ranges
Dec 16 12:17:23.791846 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 16 12:17:23.791853 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 16 12:17:23.791859 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 16 12:17:23.791865 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 16 12:17:23.791871 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 16 12:17:23.791877 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 16 12:17:23.791885 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 16 12:17:23.791891 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 16 12:17:23.791897 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:17:23.791906 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:17:23.791912 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:17:23.791918 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 16 12:17:23.791925 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:17:23.791933 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:17:23.791939 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:17:23.791945 kernel: psci: Trusted OS migration not required
Dec 16 12:17:23.791952 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:17:23.791958 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:17:23.791965 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:17:23.791971 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:17:23.791978 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:17:23.791984 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:17:23.791992 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:17:23.791998 kernel: CPU features: detected: Spectre-v4
Dec 16 12:17:23.792004 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:17:23.792011 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:17:23.792017 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:17:23.792023 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:17:23.792030 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:17:23.792036 kernel: alternatives: applying boot alternatives
Dec 16 12:17:23.792043 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:17:23.792050 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:17:23.792057 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:17:23.792065 kernel: Fallback order for Node 0: 0
Dec 16 12:17:23.792071 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:17:23.792077 kernel: Policy zone: DMA
Dec 16 12:17:23.792084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:17:23.792090 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:17:23.792096 kernel: software IO TLB: area num 4.
Dec 16 12:17:23.792102 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:17:23.792109 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 16 12:17:23.792115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:17:23.792122 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:17:23.792129 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:17:23.792135 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:17:23.792143 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:17:23.792149 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:17:23.792156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:17:23.792162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:17:23.792169 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:17:23.792175 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:17:23.792182 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:17:23.792188 kernel: GICv3: 256 SPIs implemented
Dec 16 12:17:23.792195 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:17:23.792201 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:17:23.792207 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:17:23.792214 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:17:23.792222 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:17:23.792228 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:17:23.792235 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:17:23.792241 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:17:23.792248 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:17:23.792254 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:17:23.792261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:17:23.792267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:17:23.792274 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:17:23.792280 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:17:23.792287 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:17:23.792295 kernel: arm-pv: using stolen time PV
Dec 16 12:17:23.792301 kernel: Console: colour dummy device 80x25
Dec 16 12:17:23.792308 kernel: ACPI: Core revision 20240827
Dec 16 12:17:23.792315 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:17:23.792321 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:17:23.792328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:17:23.792334 kernel: landlock: Up and running.
Dec 16 12:17:23.792341 kernel: SELinux: Initializing.
Dec 16 12:17:23.792347 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:17:23.792355 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:17:23.792362 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:17:23.792369 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:17:23.792375 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:17:23.792382 kernel: Remapping and enabling EFI services.
Dec 16 12:17:23.792389 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:17:23.792395 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:17:23.792402 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:17:23.792408 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:17:23.792416 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:17:23.792427 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:17:23.792434 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:17:23.792442 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:17:23.792449 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:17:23.792456 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:17:23.792463 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:17:23.792470 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:17:23.792478 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:17:23.792485 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:17:23.792492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:17:23.792499 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:17:23.792567 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:17:23.792578 kernel: SMP: Total of 4 processors activated.
Dec 16 12:17:23.792585 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:17:23.792592 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:17:23.792599 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:17:23.792606 kernel: CPU features: detected: Common not Private translations
Dec 16 12:17:23.792617 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:17:23.792624 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:17:23.792631 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:17:23.792638 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:17:23.792645 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:17:23.792652 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:17:23.792659 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:17:23.792666 kernel: alternatives: applying system-wide alternatives
Dec 16 12:17:23.792673 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:17:23.792682 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 16 12:17:23.792689 kernel: devtmpfs: initialized
Dec 16 12:17:23.792696 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:17:23.792703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:17:23.792710 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:17:23.792717 kernel: 0 pages in range for non-PLT usage
Dec 16 12:17:23.792724 kernel: 508400 pages in range for PLT usage
Dec 16 12:17:23.792730 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:17:23.792737 kernel: SMBIOS 3.0.0 present.
Dec 16 12:17:23.792746 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:17:23.792753 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:17:23.792760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:17:23.792767 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:17:23.792774 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:17:23.792781 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:17:23.792788 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:17:23.792795 kernel: audit: type=2000 audit(0.028:1): state=initialized audit_enabled=0 res=1
Dec 16 12:17:23.792802 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:17:23.792810 kernel: cpuidle: using governor menu
Dec 16 12:17:23.792817 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:17:23.792824 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:17:23.792841 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:17:23.792848 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:17:23.792860 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:17:23.792867 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:17:23.792874 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:17:23.792881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:17:23.792891 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:17:23.792898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:17:23.792905 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:17:23.792912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:17:23.792919 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:17:23.792926 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:17:23.792933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:17:23.792955 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:17:23.792963 kernel: ACPI: Interpreter enabled
Dec 16 12:17:23.792971 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:17:23.792978 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:17:23.792985 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:17:23.792992 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:17:23.792999 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:17:23.793006 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:17:23.793013 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:17:23.793020 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:17:23.793027 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:17:23.793183 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:17:23.793249 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:17:23.793307 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:17:23.793363 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:17:23.793418 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:17:23.793428 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:17:23.793435 kernel: PCI host bridge to bus 0000:00
Dec 16 12:17:23.793503 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:17:23.793569 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:17:23.793621 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:17:23.793671 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:17:23.793749 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:17:23.793818 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:17:23.793915 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:17:23.793976 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:17:23.794036 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:17:23.794095 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:17:23.794153 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:17:23.794211 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:17:23.794265 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:17:23.794317 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:17:23.794373 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:17:23.794383 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:17:23.794390 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:17:23.794398 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:17:23.794405 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:17:23.794412 kernel: iommu: Default domain type: Translated
Dec 16 12:17:23.794419 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:17:23.794426 kernel: efivars: Registered efivars operations
Dec 16 12:17:23.794435 kernel: vgaarb: loaded
Dec 16 12:17:23.794442 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:17:23.794449 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:17:23.794456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:17:23.794463 kernel: pnp: PnP ACPI init
Dec 16 12:17:23.794544 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:17:23.794555 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:17:23.794562 kernel: NET: Registered PF_INET protocol family
Dec 16 12:17:23.794571 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:17:23.794579 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:17:23.794586 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:17:23.794593 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:17:23.794600 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:17:23.794607 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:17:23.794614 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:17:23.794621 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:17:23.794629 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:17:23.794637 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:17:23.794645 kernel: kvm [1]: HYP mode not available
Dec 16 12:17:23.794651 kernel: Initialise system trusted keyrings
Dec 16 12:17:23.794658 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:17:23.794665 kernel: Key type asymmetric registered
Dec 16 12:17:23.794673 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:17:23.794680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:17:23.794688 kernel: io scheduler mq-deadline registered
Dec 16 12:17:23.794695 kernel: io scheduler kyber registered
Dec 16 12:17:23.794704 kernel: io scheduler bfq registered
Dec 16 12:17:23.794711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:17:23.794718 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:17:23.794725 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:17:23.794786 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:17:23.794796 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:17:23.794803 kernel: thunder_xcv, ver 1.0
Dec 16 12:17:23.794810 kernel: thunder_bgx, ver 1.0
Dec 16 12:17:23.794817 kernel: nicpf, ver 1.0
Dec 16 12:17:23.794827 kernel: nicvf, ver 1.0
Dec 16 12:17:23.794933 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:17:23.794991 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:17:23 UTC (1765887443)
Dec 16 12:17:23.795001 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:17:23.795008 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:17:23.795015 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:17:23.795022 kernel: watchdog: NMI not fully supported
Dec 16 12:17:23.795029 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:17:23.795039 kernel: Segment Routing with IPv6
Dec 16 12:17:23.795046 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:17:23.795053 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:17:23.795060 kernel: Key type dns_resolver registered
Dec 16 12:17:23.795066 kernel: registered taskstats version 1
Dec 16 12:17:23.795073 kernel: Loading compiled-in X.509 certificates
Dec 16 12:17:23.795081 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:17:23.795087 kernel: Demotion targets for Node 0: null
Dec 16 12:17:23.795094 kernel: Key type .fscrypt registered
Dec 16 12:17:23.795103 kernel: Key type fscrypt-provisioning registered
Dec 16 12:17:23.795110 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:17:23.795117 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:17:23.795124 kernel: ima: No architecture policies found
Dec 16 12:17:23.795131 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:17:23.795138 kernel: clk: Disabling unused clocks
Dec 16 12:17:23.795145 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:17:23.795152 kernel: Warning: unable to open an initial console.
Dec 16 12:17:23.795159 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:17:23.795167 kernel: Run /init as init process
Dec 16 12:17:23.795174 kernel: with arguments:
Dec 16 12:17:23.795182 kernel: /init
Dec 16 12:17:23.795189 kernel: with environment:
Dec 16 12:17:23.795196 kernel: HOME=/
Dec 16 12:17:23.795203 kernel: TERM=linux
Dec 16 12:17:23.795211 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:17:23.795221 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:17:23.795231 systemd[1]: Detected virtualization kvm.
Dec 16 12:17:23.795239 systemd[1]: Detected architecture arm64.
Dec 16 12:17:23.795246 systemd[1]: Running in initrd.
Dec 16 12:17:23.795253 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:17:23.795261 systemd[1]: Hostname set to .
Dec 16 12:17:23.795268 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:17:23.795275 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:17:23.795283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:17:23.795292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:17:23.795300 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:17:23.795308 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:17:23.795316 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:17:23.795324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:17:23.795332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:17:23.795341 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:17:23.795349 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:17:23.795357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:17:23.795364 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:17:23.795372 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:17:23.795379 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:17:23.795387 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:17:23.795394 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:17:23.795402 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:17:23.795411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:17:23.795418 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:17:23.795426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:17:23.795434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:17:23.795442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:17:23.795449 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:17:23.795457 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:17:23.795465 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:17:23.795474 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:17:23.795482 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:17:23.795490 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:17:23.795498 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:17:23.795512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:17:23.795522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:17:23.795529 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:17:23.795540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:17:23.795548 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:17:23.795556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:17:23.795587 systemd-journald[244]: Collecting audit messages is disabled.
Dec 16 12:17:23.795609 systemd-journald[244]: Journal started
Dec 16 12:17:23.795626 systemd-journald[244]: Runtime Journal (/run/log/journal/d0cf327524974267bb30c905e66b1fb0) is 6M, max 48.5M, 42.4M free.
Dec 16 12:17:23.788541 systemd-modules-load[246]: Inserted module 'overlay'
Dec 16 12:17:23.799324 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:17:23.800130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:17:23.802377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:17:23.803920 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 16 12:17:23.804886 kernel: Bridge firewalling registered
Dec 16 12:17:23.805586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:17:23.807434 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:17:23.809203 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:17:23.829010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:17:23.831868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:17:23.833921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:17:23.843043 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:17:23.845874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:17:23.847064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:17:23.849302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:17:23.853547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:17:23.857820 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:17:23.860276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:17:23.881898 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:17:23.896414 systemd-resolved[288]: Positive Trust Anchors:
Dec 16 12:17:23.896438 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:17:23.896468 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:17:23.901619 systemd-resolved[288]: Defaulting to hostname 'linux'.
Dec 16 12:17:23.902678 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:17:23.905285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:17:23.967867 kernel: SCSI subsystem initialized
Dec 16 12:17:23.972851 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:17:23.980863 kernel: iscsi: registered transport (tcp)
Dec 16 12:17:23.993858 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:17:23.993890 kernel: QLogic iSCSI HBA Driver
Dec 16 12:17:24.012163 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:17:24.037487 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:17:24.039668 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:17:24.088411 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:17:24.090907 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:17:24.156873 kernel: raid6: neonx8 gen() 15325 MB/s
Dec 16 12:17:24.173852 kernel: raid6: neonx4 gen() 15786 MB/s
Dec 16 12:17:24.190876 kernel: raid6: neonx2 gen() 13004 MB/s
Dec 16 12:17:24.207864 kernel: raid6: neonx1 gen() 10453 MB/s
Dec 16 12:17:24.224858 kernel: raid6: int64x8 gen() 6834 MB/s
Dec 16 12:17:24.241861 kernel: raid6: int64x4 gen() 7151 MB/s
Dec 16 12:17:24.258877 kernel: raid6: int64x2 gen() 6083 MB/s
Dec 16 12:17:24.275989 kernel: raid6: int64x1 gen() 4911 MB/s
Dec 16 12:17:24.276009 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Dec 16 12:17:24.293871 kernel: raid6: .... xor() 12289 MB/s, rmw enabled
Dec 16 12:17:24.293902 kernel: raid6: using neon recovery algorithm
Dec 16 12:17:24.298859 kernel: xor: measuring software checksum speed
Dec 16 12:17:24.300094 kernel: 8regs : 18860 MB/sec
Dec 16 12:17:24.300124 kernel: 32regs : 21009 MB/sec
Dec 16 12:17:24.301293 kernel: arm64_neon : 27766 MB/sec
Dec 16 12:17:24.301309 kernel: xor: using function: arm64_neon (27766 MB/sec)
Dec 16 12:17:24.355870 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:17:24.363581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:17:24.366271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:17:24.401516 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Dec 16 12:17:24.405816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:17:24.408123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:17:24.432723 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Dec 16 12:17:24.457944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:17:24.460376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:17:24.530117 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:17:24.533062 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:17:24.584862 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:17:24.586866 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:17:24.596959 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:17:24.597031 kernel: GPT:9289727 != 19775487
Dec 16 12:17:24.598178 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:17:24.598206 kernel: GPT:9289727 != 19775487
Dec 16 12:17:24.598843 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:17:24.599864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:17:24.606113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:17:24.606360 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:17:24.610014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:17:24.613157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:17:24.636209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:17:24.644769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:17:24.651190 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:17:24.666215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:17:24.673486 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:17:24.674724 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:17:24.684146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:17:24.685410 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:17:24.687428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:17:24.689534 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:17:24.692430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:17:24.694395 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:17:24.716660 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:17:24.763890 disk-uuid[587]: Primary Header is updated.
Dec 16 12:17:24.763890 disk-uuid[587]: Secondary Entries is updated.
Dec 16 12:17:24.763890 disk-uuid[587]: Secondary Header is updated.
Dec 16 12:17:24.768867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:17:25.780894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:17:25.782163 disk-uuid[595]: The operation has completed successfully.
Dec 16 12:17:25.829826 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:17:25.829957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:17:25.861887 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:17:25.901086 sh[607]: Success
Dec 16 12:17:25.917940 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:17:25.918013 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:17:25.920865 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:17:25.932889 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:17:25.978167 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:17:25.981798 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:17:25.997076 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:17:26.007398 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (619)
Dec 16 12:17:26.007460 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:17:26.007472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:17:26.015553 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:17:26.015637 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:17:26.016933 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:17:26.018677 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:17:26.020046 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:17:26.021035 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:17:26.025744 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:17:26.065895 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (649)
Dec 16 12:17:26.068969 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:17:26.069032 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:17:26.073880 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:17:26.073950 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:17:26.083341 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:17:26.086910 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:17:26.089982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:17:26.172884 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:17:26.176453 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:17:26.226811 systemd-networkd[801]: lo: Link UP
Dec 16 12:17:26.226824 systemd-networkd[801]: lo: Gained carrier
Dec 16 12:17:26.227694 systemd-networkd[801]: Enumeration completed
Dec 16 12:17:26.227860 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:17:26.228238 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:17:26.231993 ignition[708]: Ignition 2.22.0
Dec 16 12:17:26.228242 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:17:26.232000 ignition[708]: Stage: fetch-offline
Dec 16 12:17:26.228780 systemd-networkd[801]: eth0: Link UP
Dec 16 12:17:26.232040 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:17:26.229283 systemd-networkd[801]: eth0: Gained carrier
Dec 16 12:17:26.232048 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:17:26.229296 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:17:26.232153 ignition[708]: parsed url from cmdline: ""
Dec 16 12:17:26.230296 systemd[1]: Reached target network.target - Network.
Dec 16 12:17:26.232157 ignition[708]: no config URL provided
Dec 16 12:17:26.232162 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:17:26.232170 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:17:26.232194 ignition[708]: op(1): [started] loading QEMU firmware config module
Dec 16 12:17:26.232199 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:17:26.241882 ignition[708]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:17:26.252912 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:17:26.289104 ignition[708]: parsing config with SHA512: ce08f46dd92ac0c8e7295e98c0b9caa65c5757a066a7f61de7c2d9d1f046e4f113fb14a5707f6ad6c3953dfc4f56606581f1f3f7ec0d5d4b42706f8eda7a4bdb
Dec 16 12:17:26.295361 unknown[708]: fetched base config from "system"
Dec 16 12:17:26.296186 unknown[708]: fetched user config from "qemu"
Dec 16 12:17:26.296650 ignition[708]: fetch-offline: fetch-offline passed
Dec 16 12:17:26.296749 ignition[708]: Ignition finished successfully
Dec 16 12:17:26.298843 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:17:26.300685 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:17:26.301563 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:17:26.345602 ignition[809]: Ignition 2.22.0
Dec 16 12:17:26.345619 ignition[809]: Stage: kargs
Dec 16 12:17:26.345783 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:17:26.345793 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:17:26.349131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:17:26.346733 ignition[809]: kargs: kargs passed
Dec 16 12:17:26.346787 ignition[809]: Ignition finished successfully
Dec 16 12:17:26.356796 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:17:26.396216 ignition[817]: Ignition 2.22.0
Dec 16 12:17:26.396239 ignition[817]: Stage: disks
Dec 16 12:17:26.396396 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:17:26.396405 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:17:26.397405 ignition[817]: disks: disks passed
Dec 16 12:17:26.399777 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:17:26.397465 ignition[817]: Ignition finished successfully
Dec 16 12:17:26.401944 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:17:26.403111 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:17:26.404942 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:17:26.406323 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:17:26.408078 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:17:26.410991 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:17:26.436957 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.13
Dec 16 12:17:26.436974 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Dec 16 12:17:26.440020 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:17:26.449511 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:17:26.452716 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:17:26.535875 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:17:26.536687 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:17:26.538136 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:17:26.541820 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:17:26.544294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:17:26.545287 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:17:26.545336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:17:26.545364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:17:26.554054 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:17:26.556987 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:17:26.562851 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834)
Dec 16 12:17:26.565037 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:17:26.565087 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:17:26.573525 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:17:26.573606 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:17:26.575744 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:17:26.614055 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:17:26.617956 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:17:26.622158 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:17:26.626314 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:17:26.717048 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:17:26.719647 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:17:26.721652 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:17:26.745877 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:17:26.766026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:17:26.789715 ignition[948]: INFO : Ignition 2.22.0
Dec 16 12:17:26.789715 ignition[948]: INFO : Stage: mount
Dec 16 12:17:26.791255 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:17:26.791255 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:17:26.791255 ignition[948]: INFO : mount: mount passed
Dec 16 12:17:26.791255 ignition[948]: INFO : Ignition finished successfully
Dec 16 12:17:26.793699 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:17:26.795809 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:17:27.005781 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:17:27.007466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:17:27.038869 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (960)
Dec 16 12:17:27.040879 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:17:27.040928 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:17:27.045869 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:17:27.045938 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:17:27.047368 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:17:27.103998 ignition[978]: INFO : Ignition 2.22.0 Dec 16 12:17:27.103998 ignition[978]: INFO : Stage: files Dec 16 12:17:27.106314 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:17:27.106314 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:17:27.106314 ignition[978]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:17:27.109879 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:17:27.109879 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:17:27.113053 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:17:27.114441 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:17:27.114441 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:17:27.113982 unknown[978]: wrote ssh authorized keys file for user: core Dec 16 12:17:27.122599 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:17:27.124584 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 12:17:27.168453 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:17:27.294338 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:17:27.294338 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:17:27.298738 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 16 12:17:27.493347 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 12:17:27.556526 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:17:27.556526 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:17:27.561323 ignition[978]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:17:27.561323 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:17:27.577356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:17:27.577356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:17:27.577356 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:17:27.728036 systemd-networkd[801]: eth0: Gained IPv6LL Dec 16 12:17:27.783386 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 12:17:28.001565 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:17:28.001565 ignition[978]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 12:17:28.005569 ignition[978]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:17:28.064448 ignition[978]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:17:28.064448 ignition[978]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 12:17:28.064448 ignition[978]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 16 12:17:28.064448 ignition[978]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:17:28.070936 ignition[978]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:17:28.070936 ignition[978]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 16 12:17:28.070936 ignition[978]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:17:28.086105 ignition[978]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:17:28.090926 ignition[978]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:17:28.093701 ignition[978]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:17:28.093701 ignition[978]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:17:28.093701 ignition[978]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:17:28.093701 ignition[978]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:17:28.093701 ignition[978]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:17:28.093701 ignition[978]: INFO : files: files passed Dec 16 12:17:28.093701 ignition[978]: INFO : Ignition finished successfully Dec 16 12:17:28.094265 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:17:28.097185 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:17:28.100309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:17:28.115348 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:17:28.115475 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:17:28.119360 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:17:28.121758 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:17:28.121758 initrd-setup-root-after-ignition[1009]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:17:28.124824 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:17:28.125857 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:17:28.129256 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:17:28.132435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:17:28.211088 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:17:28.211441 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:17:28.214181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:17:28.215621 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:17:28.217512 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:17:28.218476 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:17:28.251044 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:17:28.253772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:17:28.271766 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:17:28.272986 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:17:28.274897 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:17:28.276246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:17:28.276386 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:17:28.278870 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:17:28.280803 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:17:28.282306 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:17:28.284092 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:17:28.285877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:17:28.287695 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
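The files stage recorded above is driven entirely by the Ignition config handed to the VM: it creates the "core" user, installs SSH keys, fetches artifacts over HTTPS, links the kubernetes sysext image and flips unit presets. A minimal Python sketch of a spec-3.x Ignition config that would request operations of this shape; the SSH key and unit contents below are placeholders, not values taken from this host:

    import json

    # Sketch only: an Ignition (spec 3.x) config mirroring the logged files stage.
    # All literal values below are placeholders for illustration.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))

On Flatcar such a config is typically written as Butane YAML and transpiled to JSON of roughly this shape before being served to the instance.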
Dec 16 12:17:28.289654 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:17:28.291633 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:17:28.293271 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:17:28.294960 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:17:28.296551 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:17:28.297844 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:17:28.297986 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:17:28.299991 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:17:28.301694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:17:28.303377 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:17:28.303530 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:17:28.305318 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:17:28.305460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:17:28.307846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:17:28.308033 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:17:28.309679 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:17:28.310973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:17:28.311103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:17:28.312812 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:17:28.314397 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:17:28.315696 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:17:28.315792 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:17:28.317430 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:17:28.317542 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:17:28.319210 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:17:28.319344 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:17:28.320875 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:17:28.320978 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:17:28.323302 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:17:28.325261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:17:28.326827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:17:28.326973 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:17:28.328731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:17:28.328846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:17:28.334313 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:17:28.341032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:17:28.350308 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
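Everything from dracut-pre-pivot onwards above is the initrd unwinding itself before the switch to the real root. A throwaway helper, illustrative only, that pulls the ordered list of "Stopped ..." events out of journal text like this excerpt:

    import re

    # Illustrative: extract "Stopped <unit/target>" events and their timestamps
    # from journalctl-style text such as the teardown sequence above.
    STOPPED = re.compile(
        r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: Stopped "
        r"(?P<what>.+?)\.(?=\s+\w{3} \d|\s*$)"
    )

    def stopped_events(text):
        """Return (timestamp, description) pairs in the order they appear."""
        return [(m.group("ts"), m.group("what")) for m in STOPPED.finditer(text)]

    sample = (
        "Dec 16 12:17:28.276246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. "
        "Dec 16 12:17:28.276386 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. "
        "Dec 16 12:17:28.278870 systemd[1]: Stopped target initrd.target - Initrd Default Target."
    )

    for ts, what in stopped_events(sample):
        print(ts, "->", what)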
Dec 16 12:17:28.366667 ignition[1034]: INFO : Ignition 2.22.0 Dec 16 12:17:28.366667 ignition[1034]: INFO : Stage: umount Dec 16 12:17:28.368264 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:17:28.368264 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:17:28.371057 ignition[1034]: INFO : umount: umount passed Dec 16 12:17:28.371057 ignition[1034]: INFO : Ignition finished successfully Dec 16 12:17:28.371219 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:17:28.371323 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:17:28.372744 systemd[1]: Stopped target network.target - Network. Dec 16 12:17:28.374177 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:17:28.374252 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:17:28.376460 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:17:28.376537 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:17:28.378094 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:17:28.378156 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:17:28.379396 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:17:28.379445 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:17:28.381166 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:17:28.392814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:17:28.404232 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:17:28.404347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:17:28.409016 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:17:28.409319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:17:28.409364 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:17:28.413219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:17:28.414083 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:17:28.414188 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:17:28.418408 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:17:28.418734 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:17:28.421000 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:17:28.421042 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:17:28.423742 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:17:28.425367 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:17:28.425435 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:17:28.427651 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:17:28.427712 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:17:28.430528 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:17:28.430581 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 16 12:17:28.432444 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:17:28.437536 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:17:28.451721 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:17:28.451925 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:17:28.455272 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:17:28.455314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:17:28.458812 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:17:28.458865 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:17:28.460704 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:17:28.460771 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:17:28.463204 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:17:28.463271 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:17:28.465607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:17:28.465681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:17:28.468963 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:17:28.470763 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:17:28.470857 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:17:28.474112 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:17:28.474171 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:17:28.476574 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:17:28.476632 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:17:28.479678 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:17:28.479740 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:17:28.481847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:17:28.481907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:17:28.484995 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:17:28.485086 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:17:28.486332 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:17:28.486419 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:17:28.489299 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:17:28.489389 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:17:28.491902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:17:28.492026 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:17:28.493538 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:17:28.496646 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:17:28.528707 systemd[1]: Switching root. 
Dec 16 12:17:28.567796 systemd-journald[244]: Journal stopped Dec 16 12:17:29.656385 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Dec 16 12:17:29.656438 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:17:29.656450 kernel: SELinux: policy capability open_perms=1 Dec 16 12:17:29.656460 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:17:29.656469 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:17:29.656478 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:17:29.656498 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:17:29.656519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:17:29.656535 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:17:29.656544 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:17:29.656557 kernel: audit: type=1403 audit(1765887448.945:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:17:29.656572 systemd[1]: Successfully loaded SELinux policy in 61.067ms. Dec 16 12:17:29.656585 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.033ms. Dec 16 12:17:29.656596 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:17:29.656607 systemd[1]: Detected virtualization kvm. Dec 16 12:17:29.656618 systemd[1]: Detected architecture arm64. Dec 16 12:17:29.656628 systemd[1]: Detected first boot. Dec 16 12:17:29.656637 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:17:29.656647 zram_generator::config[1079]: No configuration found. Dec 16 12:17:29.656659 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:17:29.656668 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:17:29.656679 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:17:29.656689 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:17:29.656699 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:17:29.656709 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:17:29.656719 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:17:29.656729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:17:29.656741 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:17:29.656751 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:17:29.656762 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:17:29.656774 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:17:29.656785 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:17:29.656794 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:17:29.656805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:17:29.656815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
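The systemd 256.8 banner above packs the build-time feature matrix into a single +/- flag string. An illustrative helper for splitting such a banner into enabled and disabled sets (the sample string below is truncated):

    # Illustrative: split a systemd feature string like
    # "+PAM +AUDIT +SELINUX -APPARMOR ..." into enabled/disabled feature sets.
    def parse_features(banner):
        enabled, disabled = set(), set()
        for token in banner.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-"):
                disabled.add(token[1:])
        return enabled, disabled

    banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL"
    enabled, disabled = parse_features(banner)
    print("enabled: ", sorted(enabled))
    print("disabled:", sorted(disabled))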
Dec 16 12:17:29.656825 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:17:29.656858 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:17:29.656870 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:17:29.656881 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:17:29.656891 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:17:29.656902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:17:29.656912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:17:29.656922 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:17:29.656932 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:17:29.656944 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:17:29.656955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:17:29.656966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:17:29.656976 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:17:29.656986 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:17:29.656996 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:17:29.657006 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:17:29.657017 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:17:29.657027 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:17:29.657038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:17:29.657049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:17:29.657059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:17:29.657073 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:17:29.657083 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:17:29.657093 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:17:29.657103 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:17:29.657112 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:17:29.657122 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:17:29.657134 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:17:29.657144 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:17:29.657154 systemd[1]: Reached target machines.target - Containers. Dec 16 12:17:29.657164 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:17:29.657175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:17:29.657185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:17:29.657195 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 16 12:17:29.657205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:17:29.657216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:17:29.657227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:17:29.657238 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:17:29.657247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:17:29.657258 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:17:29.657269 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:17:29.657279 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:17:29.657288 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:17:29.657299 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:17:29.657311 kernel: fuse: init (API version 7.41) Dec 16 12:17:29.657321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:17:29.657331 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:17:29.657341 kernel: loop: module loaded Dec 16 12:17:29.657351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:17:29.657361 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:17:29.657372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:17:29.657386 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:17:29.657395 kernel: ACPI: bus type drm_connector registered Dec 16 12:17:29.657407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:17:29.657474 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:17:29.657492 systemd[1]: Stopped verity-setup.service. Dec 16 12:17:29.657505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:17:29.657515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:17:29.657527 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:17:29.657563 systemd-journald[1151]: Collecting audit messages is disabled. Dec 16 12:17:29.657584 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:17:29.657595 systemd-journald[1151]: Journal started Dec 16 12:17:29.657616 systemd-journald[1151]: Runtime Journal (/run/log/journal/d0cf327524974267bb30c905e66b1fb0) is 6M, max 48.5M, 42.4M free. Dec 16 12:17:29.415576 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:17:29.434014 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:17:29.434412 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:17:29.659870 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:17:29.660610 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:17:29.661745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:17:29.664846 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
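The modprobe@*.service units starting above are instances of one systemd template unit; the instance name after the '@' is the kernel module to load (dm_mod, drm, efi_pstore, fuse, loop). A rough Python sketch of the equivalent action, assuming modprobe is on PATH and the caller is root; this is not the unit's literal ExecStart line:

    import subprocess

    # Sketch: load one kernel module per "instance", roughly what a
    # modprobe@<module>.service instance does at boot. Requires root.
    def load_module(name):
        # -a: treat arguments as module names, -b: honour blacklists,
        # -q: stay quiet if the module does not exist
        return subprocess.run(["modprobe", "-abq", name], check=False).returncode

    for module in ["dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        rc = load_module(module)
        print(f"modprobe {module}: rc={rc}")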
Dec 16 12:17:29.666204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:17:29.667745 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:17:29.668002 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:17:29.669390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:17:29.669585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:17:29.671151 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:17:29.671320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:17:29.674203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:17:29.674387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:17:29.676380 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:17:29.676583 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:17:29.677871 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:17:29.678039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:17:29.679427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:17:29.680697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:17:29.682179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:17:29.683865 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:17:29.697012 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:17:29.699467 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:17:29.701705 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:17:29.702906 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:17:29.702970 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:17:29.704771 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:17:29.708903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:17:29.709986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:17:29.711282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:17:29.713388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:17:29.714874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:17:29.718013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:17:29.719143 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:17:29.720664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:17:29.723079 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Dec 16 12:17:29.724703 systemd-journald[1151]: Time spent on flushing to /var/log/journal/d0cf327524974267bb30c905e66b1fb0 is 16.303ms for 887 entries. Dec 16 12:17:29.724703 systemd-journald[1151]: System Journal (/var/log/journal/d0cf327524974267bb30c905e66b1fb0) is 8M, max 195.6M, 187.6M free. Dec 16 12:17:29.756641 systemd-journald[1151]: Received client request to flush runtime journal. Dec 16 12:17:29.756778 kernel: loop0: detected capacity change from 0 to 207008 Dec 16 12:17:29.727096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:17:29.732340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:17:29.733919 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:17:29.735463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:17:29.756925 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:17:29.758990 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:17:29.761666 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:17:29.765169 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:17:29.768403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:17:29.771062 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:17:29.771859 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Dec 16 12:17:29.771875 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Dec 16 12:17:29.775826 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:17:29.781437 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:17:29.790919 kernel: loop1: detected capacity change from 0 to 100632 Dec 16 12:17:29.809411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:17:29.815754 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:17:29.818727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:17:29.830914 kernel: loop2: detected capacity change from 0 to 119840 Dec 16 12:17:29.837388 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Dec 16 12:17:29.837407 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Dec 16 12:17:29.841649 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:17:29.859881 kernel: loop3: detected capacity change from 0 to 207008 Dec 16 12:17:29.866870 kernel: loop4: detected capacity change from 0 to 100632 Dec 16 12:17:29.874853 kernel: loop5: detected capacity change from 0 to 119840 Dec 16 12:17:29.880166 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:17:29.880569 (sd-merge)[1222]: Merged extensions into '/usr'. Dec 16 12:17:29.884266 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:17:29.884464 systemd[1]: Reloading... Dec 16 12:17:29.943602 zram_generator::config[1244]: No configuration found. Dec 16 12:17:30.042083 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
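The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr before services start. A small illustrative wrapper for inspecting that merged state with the systemd-sysext tool (systemd 248+); output parsing is deliberately left out:

    import subprocess

    # Illustrative: list known system extensions and show which hierarchies
    # (e.g. /usr, /opt) are currently overlaid, using the systemd-sysext CLI.
    def sysext(*args):
        return subprocess.run(["systemd-sysext", *args], capture_output=True, text=True)

    listing = sysext("list")    # images found under /etc/extensions, /var/lib/extensions, ...
    status = sysext("status")   # merged hierarchies and the images backing them
    print(listing.stdout or listing.stderr)
    print(status.stdout or status.stderr)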
Dec 16 12:17:30.098879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:17:30.099054 systemd[1]: Reloading finished in 214 ms. Dec 16 12:17:30.124873 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:17:30.126204 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:17:30.145753 systemd[1]: Starting ensure-sysext.service... Dec 16 12:17:30.148703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:17:30.160130 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:17:30.160162 systemd[1]: Reloading... Dec 16 12:17:30.167096 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:17:30.167131 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:17:30.167448 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:17:30.167662 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:17:30.168616 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:17:30.168859 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 16 12:17:30.168919 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 16 12:17:30.172198 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:17:30.172210 systemd-tmpfiles[1286]: Skipping /boot Dec 16 12:17:30.182446 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:17:30.182559 systemd-tmpfiles[1286]: Skipping /boot Dec 16 12:17:30.206878 zram_generator::config[1313]: No configuration found. Dec 16 12:17:30.345958 systemd[1]: Reloading finished in 185 ms. Dec 16 12:17:30.369629 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:17:30.375704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:17:30.387952 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:17:30.390633 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:17:30.393076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:17:30.397047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:17:30.400750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:17:30.405125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:17:30.411238 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:17:30.421565 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:17:30.424999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:17:30.427148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:17:30.430060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
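The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") indicate that more than one tmpfiles.d line targets the same path and the later duplicates are dropped. A toy model of that dedup step; the sample entries are placeholders, not the real Flatcar snippets:

    # Illustrative: keep only the first tmpfiles.d line seen for each path,
    # mirroring the "Duplicate line for path ..., ignoring" behaviour logged above.
    def dedup_tmpfiles(lines):
        seen, kept, ignored = set(), [], []
        for line in lines:
            fields = line.split()
            if len(fields) < 2:
                continue
            path = fields[1]          # tmpfiles.d format: Type Path Mode User Group Age Argument
            if path in seen:
                ignored.append(line)
            else:
                seen.add(path)
                kept.append(line)
        return kept, ignored

    sample = [
        "d /var/lib/nfs/sm 0700 statd statd -",             # placeholder entry
        "d /var/lib/nfs/sm 0700 statd statd -",             # duplicate path -> ignored
        "d /var/log/journal 2755 root systemd-journal -",
    ]
    kept, ignored = dedup_tmpfiles(sample)
    print("kept:   ", kept)
    print("ignored:", ignored)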
Dec 16 12:17:30.443297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:17:30.444606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:17:30.444759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:17:30.447392 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:17:30.453260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:17:30.453459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:17:30.453527 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Dec 16 12:17:30.455371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:17:30.455555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:17:30.465999 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:17:30.466220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:17:30.471112 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:17:30.473239 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:17:30.473779 augenrules[1387]: No rules Dec 16 12:17:30.479487 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:17:30.485418 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:17:30.487086 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:17:30.494295 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:17:30.495946 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:17:30.507785 systemd[1]: Finished ensure-sysext.service. Dec 16 12:17:30.512772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:17:30.517320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:17:30.519592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:17:30.522047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:17:30.525680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:17:30.526816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:17:30.527945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:17:30.535738 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:17:30.540190 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:17:30.541548 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:17:30.542292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 16 12:17:30.544268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:17:30.545724 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:17:30.546064 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:17:30.547762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:17:30.548084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:17:30.550019 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:17:30.550298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:17:30.558634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:17:30.558705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:17:30.572065 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:17:30.615098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:17:30.617567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:17:30.640255 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:17:30.671516 systemd-resolved[1352]: Positive Trust Anchors: Dec 16 12:17:30.671535 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:17:30.671567 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:17:30.681127 systemd-resolved[1352]: Defaulting to hostname 'linux'. Dec 16 12:17:30.682582 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:17:30.683772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:17:30.691935 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:17:30.694097 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:17:30.694701 systemd-networkd[1429]: lo: Link UP Dec 16 12:17:30.695004 systemd-networkd[1429]: lo: Gained carrier Dec 16 12:17:30.695130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:17:30.696065 systemd-networkd[1429]: Enumeration completed Dec 16 12:17:30.696339 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:17:30.696760 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:17:30.696917 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
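systemd-resolved's negative trust anchors listed above are zones for which DNSSEC validation is skipped because they are normally served locally (RFC 1918 reverse zones, home.arpa, .local and similar). An illustrative converter from an in-addr.arpa zone to the IPv4 prefix it covers (Python 3.9+ for removesuffix):

    import ipaddress

    # Illustrative: translate reverse zones like "168.192.in-addr.arpa" into the
    # IPv4 prefix they cover (zone labels are the address octets in reverse order).
    def reverse_zone_to_network(zone):
        labels = zone.removesuffix(".in-addr.arpa").split(".")
        octets = list(reversed(labels)) + ["0"] * (4 - len(labels))
        return ipaddress.ip_network("%s/%d" % (".".join(octets), 8 * len(labels)))

    for zone in ["10.in-addr.arpa", "16.172.in-addr.arpa", "168.192.in-addr.arpa"]:
        print(zone, "->", reverse_zone_to_network(zone))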
Dec 16 12:17:30.697685 systemd-networkd[1429]: eth0: Link UP Dec 16 12:17:30.698169 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:17:30.698996 systemd-networkd[1429]: eth0: Gained carrier Dec 16 12:17:30.700903 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:17:30.700989 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:17:30.701027 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:17:30.701892 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:17:30.703078 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:17:30.704135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:17:30.705261 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:17:30.707427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:17:30.710004 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:17:30.713623 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:17:30.716195 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:17:30.718135 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:17:30.721739 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:17:30.723300 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:17:30.723914 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:17:30.724612 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Dec 16 12:17:30.725338 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:17:30.725397 systemd-timesyncd[1430]: Initial clock synchronization to Tue 2025-12-16 12:17:30.347134 UTC. Dec 16 12:17:30.726108 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:17:30.729300 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:17:30.730528 systemd[1]: Reached target network.target - Network. Dec 16 12:17:30.731384 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:17:30.732245 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:17:30.733245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:17:30.733276 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:17:30.735092 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:17:30.737321 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:17:30.740237 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:17:30.753076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:17:30.755231 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
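The DHCPv4 lease logged above (10.0.0.13/16, gateway 10.0.0.1, acquired from 10.0.0.1) can be sanity-checked with Python's standard ipaddress module; illustrative only:

    import ipaddress

    # Illustrative: inspect the DHCPv4 lease logged above (10.0.0.13/16, gw 10.0.0.1).
    iface = ipaddress.ip_interface("10.0.0.13/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print("network:   ", iface.network)                   # 10.0.0.0/16
    print("netmask:   ", iface.netmask)                   # 255.255.0.0
    print("usable:    ", iface.network.num_addresses - 2) # 65534 host addresses
    print("gw on-link:", gateway in iface.network)        # True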
Dec 16 12:17:30.756347 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:17:30.758088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:17:30.762169 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:17:30.764308 jq[1467]: false Dec 16 12:17:30.764421 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:17:30.772050 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:17:30.775214 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:17:30.780163 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:17:30.785069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:17:30.786141 extend-filesystems[1468]: Found /dev/vda6 Dec 16 12:17:30.787997 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:17:30.788538 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:17:30.789181 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:17:30.793933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:17:30.799605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:17:30.800872 extend-filesystems[1468]: Found /dev/vda9 Dec 16 12:17:30.801240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:17:30.804013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:17:30.804341 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:17:30.805259 extend-filesystems[1468]: Checking size of /dev/vda9 Dec 16 12:17:30.806159 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:17:30.809861 jq[1485]: true Dec 16 12:17:30.811608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:17:30.811825 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:17:30.825588 extend-filesystems[1468]: Resized partition /dev/vda9 Dec 16 12:17:30.826537 update_engine[1484]: I20251216 12:17:30.825378 1484 main.cc:92] Flatcar Update Engine starting Dec 16 12:17:30.833341 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:17:30.836024 extend-filesystems[1509]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:17:30.845235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:17:30.847874 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:17:30.849724 tar[1494]: linux-arm64/LICENSE Dec 16 12:17:30.850042 tar[1494]: linux-arm64/helm Dec 16 12:17:30.853055 jq[1496]: true Dec 16 12:17:30.868300 dbus-daemon[1465]: [system] SELinux support is enabled Dec 16 12:17:30.868570 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 12:17:30.880874 update_engine[1484]: I20251216 12:17:30.879386 1484 update_check_scheduler.cc:74] Next update check in 9m8s Dec 16 12:17:30.882852 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:17:30.887903 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:17:30.897800 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:17:30.898296 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:17:30.899227 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:17:30.899227 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:17:30.899227 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:17:30.914849 extend-filesystems[1468]: Resized filesystem in /dev/vda9 Dec 16 12:17:30.917850 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:17:30.904273 systemd-logind[1474]: New seat seat0. Dec 16 12:17:30.905724 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:17:30.908950 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:17:30.911870 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:17:30.914219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:17:30.922181 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:17:30.922358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:17:30.922487 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:17:30.927116 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:17:30.927234 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:17:30.932216 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:17:30.959704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
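The extend-filesystems run above grows /dev/vda9 online: the kernel reports a resize from 553472 to 1864699 blocks and resize2fs confirms 4k blocks. The size arithmetic, spelled out for reference:

    # Illustrative: convert the logged ext4 block counts (4 KiB blocks) into sizes.
    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1864699

    def gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {old_blocks * BLOCK} bytes (~{gib(old_blocks):.2f} GiB)")
    print(f"after:  {new_blocks * BLOCK} bytes (~{gib(new_blocks):.2f} GiB)")
    print(f"growth: ~{gib(new_blocks - old_blocks):.2f} GiB added online")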
Dec 16 12:17:30.994809 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:17:31.033888 containerd[1497]: time="2025-12-16T12:17:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:17:31.034290 containerd[1497]: time="2025-12-16T12:17:31.034251704Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.042985161Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.766µs" Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043032836Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043168620Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043352345Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043379365Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043418732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043481307Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043498838Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043761335Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043778141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043804856Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:17:31.044837 containerd[1497]: time="2025-12-16T12:17:31.043821243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:17:31.045094 containerd[1497]: time="2025-12-16T12:17:31.043923300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:17:31.045094 containerd[1497]: time="2025-12-16T12:17:31.044121736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:17:31.045094 containerd[1497]: time="2025-12-16T12:17:31.044155844Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:17:31.045094 containerd[1497]: time="2025-12-16T12:17:31.044167277Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:17:31.045094 containerd[1497]: time="2025-12-16T12:17:31.044206415Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:17:31.045429 containerd[1497]: time="2025-12-16T12:17:31.045390284Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:17:31.045614 containerd[1497]: time="2025-12-16T12:17:31.045591578Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:17:31.178285 containerd[1497]: time="2025-12-16T12:17:31.178165136Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:17:31.178285 containerd[1497]: time="2025-12-16T12:17:31.178271461Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:17:31.178403 containerd[1497]: time="2025-12-16T12:17:31.178295013Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:17:31.178450 containerd[1497]: time="2025-12-16T12:17:31.178426566Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:17:31.178474 containerd[1497]: time="2025-12-16T12:17:31.178454806Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:17:31.178474 containerd[1497]: time="2025-12-16T12:17:31.178468144Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:17:31.178514 containerd[1497]: time="2025-12-16T12:17:31.178482587Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:17:31.178514 containerd[1497]: time="2025-12-16T12:17:31.178495201Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:17:31.178514 containerd[1497]: time="2025-12-16T12:17:31.178507282Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:17:31.178567 containerd[1497]: time="2025-12-16T12:17:31.178521192Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:17:31.178567 containerd[1497]: time="2025-12-16T12:17:31.178530681Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:17:31.178599 containerd[1497]: time="2025-12-16T12:17:31.178543143Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:17:31.178870 containerd[1497]: time="2025-12-16T12:17:31.178825991Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:17:31.178896 containerd[1497]: time="2025-12-16T12:17:31.178882279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:17:31.178929 containerd[1497]: time="2025-12-16T12:17:31.178899771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 
12:17:31.179046 containerd[1497]: time="2025-12-16T12:17:31.178962728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:17:31.179046 containerd[1497]: time="2025-12-16T12:17:31.178985212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:17:31.179046 containerd[1497]: time="2025-12-16T12:17:31.178998322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:17:31.179046 containerd[1497]: time="2025-12-16T12:17:31.179013413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:17:31.179046 containerd[1497]: time="2025-12-16T12:17:31.179028848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:17:31.179228 containerd[1497]: time="2025-12-16T12:17:31.179048779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:17:31.179228 containerd[1497]: time="2025-12-16T12:17:31.179062117Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:17:31.179228 containerd[1497]: time="2025-12-16T12:17:31.179117109Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:17:31.179379 containerd[1497]: time="2025-12-16T12:17:31.179359332Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:17:31.179403 containerd[1497]: time="2025-12-16T12:17:31.179384866Z" level=info msg="Start snapshots syncer" Dec 16 12:17:31.179513 containerd[1497]: time="2025-12-16T12:17:31.179478196Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:17:31.180093 containerd[1497]: time="2025-12-16T12:17:31.180046369Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:17:31.180357 containerd[1497]: time="2025-12-16T12:17:31.180121101Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:17:31.180357 containerd[1497]: time="2025-12-16T12:17:31.180305932Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:17:31.180720 containerd[1497]: time="2025-12-16T12:17:31.180693085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:17:31.180841 containerd[1497]: time="2025-12-16T12:17:31.180782033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:17:31.180841 containerd[1497]: time="2025-12-16T12:17:31.180812444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:17:31.180841 containerd[1497]: time="2025-12-16T12:17:31.180838968Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:17:31.180903 containerd[1497]: time="2025-12-16T12:17:31.180854326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:17:31.180903 containerd[1497]: time="2025-12-16T12:17:31.180866369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:17:31.180903 containerd[1497]: time="2025-12-16T12:17:31.180878488Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:17:31.180903 containerd[1497]: time="2025-12-16T12:17:31.180908175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:17:31.180903 containerd[1497]: 
time="2025-12-16T12:17:31.180964539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:17:31.181162 containerd[1497]: time="2025-12-16T12:17:31.180982145Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:17:31.181162 containerd[1497]: time="2025-12-16T12:17:31.181044683Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:17:31.181162 containerd[1497]: time="2025-12-16T12:17:31.181061641Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:17:31.181162 containerd[1497]: time="2025-12-16T12:17:31.181071512Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:17:31.181162 containerd[1497]: time="2025-12-16T12:17:31.181081306Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:17:31.181339 containerd[1497]: time="2025-12-16T12:17:31.181088661Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:17:31.181339 containerd[1497]: time="2025-12-16T12:17:31.181209430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:17:31.181339 containerd[1497]: time="2025-12-16T12:17:31.181221510Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:17:31.181482 containerd[1497]: time="2025-12-16T12:17:31.181341212Z" level=info msg="runtime interface created" Dec 16 12:17:31.181482 containerd[1497]: time="2025-12-16T12:17:31.181351845Z" level=info msg="created NRI interface" Dec 16 12:17:31.181482 containerd[1497]: time="2025-12-16T12:17:31.181362515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:17:31.181482 containerd[1497]: time="2025-12-16T12:17:31.181377835Z" level=info msg="Connect containerd service" Dec 16 12:17:31.181482 containerd[1497]: time="2025-12-16T12:17:31.181408704Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:17:31.183199 containerd[1497]: time="2025-12-16T12:17:31.182898934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:17:31.185958 tar[1494]: linux-arm64/README.md Dec 16 12:17:31.204435 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:17:31.257697 containerd[1497]: time="2025-12-16T12:17:31.257625308Z" level=info msg="Start subscribing containerd event" Dec 16 12:17:31.257697 containerd[1497]: time="2025-12-16T12:17:31.257685826Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:17:31.257821 containerd[1497]: time="2025-12-16T12:17:31.257713722Z" level=info msg="Start recovering state" Dec 16 12:17:31.257821 containerd[1497]: time="2025-12-16T12:17:31.257754499Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:17:31.257821 containerd[1497]: time="2025-12-16T12:17:31.257811473Z" level=info msg="Start event monitor" Dec 16 12:17:31.257888 containerd[1497]: time="2025-12-16T12:17:31.257845200Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:17:31.257888 containerd[1497]: time="2025-12-16T12:17:31.257854575Z" level=info msg="Start streaming server" Dec 16 12:17:31.257888 containerd[1497]: time="2025-12-16T12:17:31.257867036Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:17:31.257888 containerd[1497]: time="2025-12-16T12:17:31.257874658Z" level=info msg="runtime interface starting up..." Dec 16 12:17:31.257888 containerd[1497]: time="2025-12-16T12:17:31.257880718Z" level=info msg="starting plugins..." Dec 16 12:17:31.257973 containerd[1497]: time="2025-12-16T12:17:31.257895390Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:17:31.258051 containerd[1497]: time="2025-12-16T12:17:31.258027972Z" level=info msg="containerd successfully booted in 0.224899s" Dec 16 12:17:31.258158 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:17:32.334358 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:17:32.356496 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:17:32.359927 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:17:32.387303 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:17:32.387523 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:17:32.390518 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:17:32.428883 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:17:32.431976 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:17:32.434360 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:17:32.435671 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:17:32.720006 systemd-networkd[1429]: eth0: Gained IPv6LL Dec 16 12:17:32.723257 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:17:32.725449 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:17:32.730453 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:17:32.749519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:32.752626 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:17:32.776192 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:17:32.777751 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:17:32.777999 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:17:32.780503 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:17:33.356733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:33.358868 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:17:33.361036 systemd[1]: Startup finished in 2.197s (kernel) + 5.326s (initrd) + 4.476s (userspace) = 12.000s. 
Dec 16 12:17:33.362684 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:17:33.731149 kubelet[1606]: E1216 12:17:33.731027 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:17:33.733528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:17:33.733661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:17:33.733974 systemd[1]: kubelet.service: Consumed 773ms CPU time, 254.9M memory peak. Dec 16 12:17:36.254602 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:17:36.257329 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:42896.service - OpenSSH per-connection server daemon (10.0.0.1:42896). Dec 16 12:17:36.324501 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 42896 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:36.326601 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:36.333778 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:17:36.334769 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:17:36.342696 systemd-logind[1474]: New session 1 of user core. Dec 16 12:17:36.364194 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:17:36.367372 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:17:36.382234 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:17:36.384787 systemd-logind[1474]: New session c1 of user core. Dec 16 12:17:36.498502 systemd[1625]: Queued start job for default target default.target. Dec 16 12:17:36.505937 systemd[1625]: Created slice app.slice - User Application Slice. Dec 16 12:17:36.505969 systemd[1625]: Reached target paths.target - Paths. Dec 16 12:17:36.506007 systemd[1625]: Reached target timers.target - Timers. Dec 16 12:17:36.507204 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:17:36.517195 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:17:36.517269 systemd[1625]: Reached target sockets.target - Sockets. Dec 16 12:17:36.517313 systemd[1625]: Reached target basic.target - Basic System. Dec 16 12:17:36.517339 systemd[1625]: Reached target default.target - Main User Target. Dec 16 12:17:36.517364 systemd[1625]: Startup finished in 125ms. Dec 16 12:17:36.517571 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:17:36.518889 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:17:36.583452 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910). Dec 16 12:17:36.656725 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:36.658185 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:36.662389 systemd-logind[1474]: New session 2 of user core. 
Dec 16 12:17:36.680054 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:17:36.731520 sshd[1639]: Connection closed by 10.0.0.1 port 42910 Dec 16 12:17:36.732051 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:36.746734 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:42910.service: Deactivated successfully. Dec 16 12:17:36.748310 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:17:36.749652 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:17:36.752121 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:42922.service - OpenSSH per-connection server daemon (10.0.0.1:42922). Dec 16 12:17:36.753366 systemd-logind[1474]: Removed session 2. Dec 16 12:17:36.815864 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 42922 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:36.817212 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:36.822698 systemd-logind[1474]: New session 3 of user core. Dec 16 12:17:36.829017 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:17:36.876287 sshd[1648]: Connection closed by 10.0.0.1 port 42922 Dec 16 12:17:36.877037 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:36.887280 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:42922.service: Deactivated successfully. Dec 16 12:17:36.890490 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:17:36.891461 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:17:36.894132 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). Dec 16 12:17:36.895192 systemd-logind[1474]: Removed session 3. Dec 16 12:17:36.970533 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:36.971921 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:36.976712 systemd-logind[1474]: New session 4 of user core. Dec 16 12:17:36.984064 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:17:37.037577 sshd[1657]: Connection closed by 10.0.0.1 port 42928 Dec 16 12:17:37.037923 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:37.052898 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:42928.service: Deactivated successfully. Dec 16 12:17:37.054715 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:17:37.058102 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:17:37.059982 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). Dec 16 12:17:37.061374 systemd-logind[1474]: Removed session 4. Dec 16 12:17:37.123050 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:37.123535 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:37.128800 systemd-logind[1474]: New session 5 of user core. Dec 16 12:17:37.135065 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:17:37.193710 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:17:37.194026 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:17:37.208917 sudo[1667]: pam_unix(sudo:session): session closed for user root Dec 16 12:17:37.210641 sshd[1666]: Connection closed by 10.0.0.1 port 42934 Dec 16 12:17:37.211047 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:37.225461 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:42934.service: Deactivated successfully. Dec 16 12:17:37.228504 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:17:37.229412 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:17:37.231781 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:42942.service - OpenSSH per-connection server daemon (10.0.0.1:42942). Dec 16 12:17:37.232787 systemd-logind[1474]: Removed session 5. Dec 16 12:17:37.279070 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 42942 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:37.280455 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:37.285607 systemd-logind[1474]: New session 6 of user core. Dec 16 12:17:37.301056 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:17:37.352236 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:17:37.352536 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:17:37.366249 sudo[1678]: pam_unix(sudo:session): session closed for user root Dec 16 12:17:37.371603 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:17:37.372247 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:17:37.382136 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:17:37.432701 augenrules[1700]: No rules Dec 16 12:17:37.433994 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:17:37.434245 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:17:37.436041 sudo[1677]: pam_unix(sudo:session): session closed for user root Dec 16 12:17:37.437395 sshd[1676]: Connection closed by 10.0.0.1 port 42942 Dec 16 12:17:37.437757 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:37.446376 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:42942.service: Deactivated successfully. Dec 16 12:17:37.449140 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:17:37.449966 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:17:37.453052 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:42950.service - OpenSSH per-connection server daemon (10.0.0.1:42950). Dec 16 12:17:37.453505 systemd-logind[1474]: Removed session 6. Dec 16 12:17:37.510694 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 42950 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:17:37.512052 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:17:37.516133 systemd-logind[1474]: New session 7 of user core. Dec 16 12:17:37.528049 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 12:17:37.577380 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:17:37.578033 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:17:37.888485 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:17:37.911234 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:17:38.135053 dockerd[1733]: time="2025-12-16T12:17:38.134976787Z" level=info msg="Starting up" Dec 16 12:17:38.136197 dockerd[1733]: time="2025-12-16T12:17:38.136143562Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:17:38.148098 dockerd[1733]: time="2025-12-16T12:17:38.147978385Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:17:38.393633 dockerd[1733]: time="2025-12-16T12:17:38.393547374Z" level=info msg="Loading containers: start." Dec 16 12:17:38.427849 kernel: Initializing XFRM netlink socket Dec 16 12:17:38.687088 systemd-networkd[1429]: docker0: Link UP Dec 16 12:17:38.696653 dockerd[1733]: time="2025-12-16T12:17:38.696586034Z" level=info msg="Loading containers: done." Dec 16 12:17:38.709177 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck612205876-merged.mount: Deactivated successfully. Dec 16 12:17:38.711733 dockerd[1733]: time="2025-12-16T12:17:38.711664985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:17:38.711869 dockerd[1733]: time="2025-12-16T12:17:38.711770039Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:17:38.711945 dockerd[1733]: time="2025-12-16T12:17:38.711898882Z" level=info msg="Initializing buildkit" Dec 16 12:17:38.743211 dockerd[1733]: time="2025-12-16T12:17:38.743162317Z" level=info msg="Completed buildkit initialization" Dec 16 12:17:38.752411 dockerd[1733]: time="2025-12-16T12:17:38.752354281Z" level=info msg="Daemon has completed initialization" Dec 16 12:17:38.752754 dockerd[1733]: time="2025-12-16T12:17:38.752516061Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:17:38.752611 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:17:39.276877 containerd[1497]: time="2025-12-16T12:17:39.276798051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:17:39.892379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968466233.mount: Deactivated successfully. 
Dec 16 12:17:41.023572 containerd[1497]: time="2025-12-16T12:17:41.023498518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:41.024179 containerd[1497]: time="2025-12-16T12:17:41.024136604Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Dec 16 12:17:41.025139 containerd[1497]: time="2025-12-16T12:17:41.025080953Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:41.028465 containerd[1497]: time="2025-12-16T12:17:41.028399886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:41.029537 containerd[1497]: time="2025-12-16T12:17:41.029513188Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.752653564s" Dec 16 12:17:41.029593 containerd[1497]: time="2025-12-16T12:17:41.029549096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:17:41.030359 containerd[1497]: time="2025-12-16T12:17:41.030327495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 12:17:42.168794 containerd[1497]: time="2025-12-16T12:17:42.168738904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:42.169690 containerd[1497]: time="2025-12-16T12:17:42.169529092Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Dec 16 12:17:42.172786 containerd[1497]: time="2025-12-16T12:17:42.172737125Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:42.175802 containerd[1497]: time="2025-12-16T12:17:42.175734159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:42.176954 containerd[1497]: time="2025-12-16T12:17:42.176914219Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.146545665s" Dec 16 12:17:42.176954 containerd[1497]: time="2025-12-16T12:17:42.176949076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 
12:17:42.177496 containerd[1497]: time="2025-12-16T12:17:42.177471291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:17:43.381763 containerd[1497]: time="2025-12-16T12:17:43.381694285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:43.383227 containerd[1497]: time="2025-12-16T12:17:43.383165742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Dec 16 12:17:43.384680 containerd[1497]: time="2025-12-16T12:17:43.384621232Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:43.389142 containerd[1497]: time="2025-12-16T12:17:43.389067656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:43.391242 containerd[1497]: time="2025-12-16T12:17:43.391188313Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.213677161s" Dec 16 12:17:43.391242 containerd[1497]: time="2025-12-16T12:17:43.391243107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:17:43.392041 containerd[1497]: time="2025-12-16T12:17:43.391767308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:17:43.759215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:17:43.760794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:44.161617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:44.187323 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:17:44.250661 kubelet[2028]: E1216 12:17:44.250612 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:17:44.253923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:17:44.254055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:17:44.254392 systemd[1]: kubelet.service: Consumed 167ms CPU time, 110.1M memory peak. Dec 16 12:17:44.890912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530189137.mount: Deactivated successfully. 
Dec 16 12:17:45.357688 containerd[1497]: time="2025-12-16T12:17:45.357615047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:45.358353 containerd[1497]: time="2025-12-16T12:17:45.358305259Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 16 12:17:45.359126 containerd[1497]: time="2025-12-16T12:17:45.359002063Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:45.362675 containerd[1497]: time="2025-12-16T12:17:45.362578285Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.970774702s" Dec 16 12:17:45.362675 containerd[1497]: time="2025-12-16T12:17:45.362620138Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:17:45.363042 containerd[1497]: time="2025-12-16T12:17:45.363018414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:17:45.363209 containerd[1497]: time="2025-12-16T12:17:45.363057805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:45.925988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3315929553.mount: Deactivated successfully. 
Dec 16 12:17:46.579790 containerd[1497]: time="2025-12-16T12:17:46.579292996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:46.580281 containerd[1497]: time="2025-12-16T12:17:46.580202160Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 16 12:17:46.581170 containerd[1497]: time="2025-12-16T12:17:46.581114187Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:46.585057 containerd[1497]: time="2025-12-16T12:17:46.584990705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:46.586320 containerd[1497]: time="2025-12-16T12:17:46.586103403Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.2229651s" Dec 16 12:17:46.586320 containerd[1497]: time="2025-12-16T12:17:46.586147560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:17:46.586614 containerd[1497]: time="2025-12-16T12:17:46.586580223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:17:47.034171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660815172.mount: Deactivated successfully. 
Dec 16 12:17:47.043082 containerd[1497]: time="2025-12-16T12:17:47.043007283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:47.044359 containerd[1497]: time="2025-12-16T12:17:47.044297721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 16 12:17:47.045494 containerd[1497]: time="2025-12-16T12:17:47.045440230Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:47.047650 containerd[1497]: time="2025-12-16T12:17:47.047593029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:47.048467 containerd[1497]: time="2025-12-16T12:17:47.048414380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 461.797424ms" Dec 16 12:17:47.048467 containerd[1497]: time="2025-12-16T12:17:47.048456702Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:17:47.048975 containerd[1497]: time="2025-12-16T12:17:47.048946076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:17:47.498025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070827500.mount: Deactivated successfully. 
Dec 16 12:17:49.123711 containerd[1497]: time="2025-12-16T12:17:49.123635185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:49.135667 containerd[1497]: time="2025-12-16T12:17:49.135614663Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 16 12:17:49.136979 containerd[1497]: time="2025-12-16T12:17:49.136926395Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:49.140874 containerd[1497]: time="2025-12-16T12:17:49.140816941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:49.142520 containerd[1497]: time="2025-12-16T12:17:49.142486936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.093506401s" Dec 16 12:17:49.142581 containerd[1497]: time="2025-12-16T12:17:49.142526127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:17:53.641556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:53.641704 systemd[1]: kubelet.service: Consumed 167ms CPU time, 110.1M memory peak. Dec 16 12:17:53.643654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:53.666968 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)... Dec 16 12:17:53.666986 systemd[1]: Reloading... Dec 16 12:17:53.744958 zram_generator::config[2227]: No configuration found. Dec 16 12:17:54.107103 systemd[1]: Reloading finished in 439 ms. Dec 16 12:17:54.178495 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:17:54.178773 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:17:54.179165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:54.179216 systemd[1]: kubelet.service: Consumed 95ms CPU time, 94.9M memory peak. Dec 16 12:17:54.180924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:54.434224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:54.439595 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:17:54.487115 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:54.487115 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:17:54.487115 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:54.487115 kubelet[2271]: I1216 12:17:54.487068 2271 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:17:55.397293 kubelet[2271]: I1216 12:17:55.397250 2271 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:17:55.397514 kubelet[2271]: I1216 12:17:55.397503 2271 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:17:55.397886 kubelet[2271]: I1216 12:17:55.397869 2271 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:17:55.436577 kubelet[2271]: E1216 12:17:55.436506 2271 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:55.436577 kubelet[2271]: I1216 12:17:55.436544 2271 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:17:55.444975 kubelet[2271]: I1216 12:17:55.444809 2271 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:17:55.448689 kubelet[2271]: I1216 12:17:55.448647 2271 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:17:55.449550 kubelet[2271]: I1216 12:17:55.449476 2271 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:17:55.449756 kubelet[2271]: I1216 12:17:55.449544 2271 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:17:55.449886 kubelet[2271]: I1216 12:17:55.449879 2271 
topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:17:55.449916 kubelet[2271]: I1216 12:17:55.449892 2271 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:17:55.450155 kubelet[2271]: I1216 12:17:55.450115 2271 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:55.453020 kubelet[2271]: I1216 12:17:55.452987 2271 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:17:55.453094 kubelet[2271]: I1216 12:17:55.453026 2271 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:17:55.453094 kubelet[2271]: I1216 12:17:55.453054 2271 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:17:55.453094 kubelet[2271]: I1216 12:17:55.453071 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:17:55.456112 kubelet[2271]: I1216 12:17:55.456086 2271 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:17:55.456807 kubelet[2271]: W1216 12:17:55.456738 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Dec 16 12:17:55.456918 kubelet[2271]: E1216 12:17:55.456808 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:55.457040 kubelet[2271]: I1216 12:17:55.457016 2271 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:17:55.457813 kubelet[2271]: W1216 12:17:55.457633 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:17:55.458158 kubelet[2271]: W1216 12:17:55.457756 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Dec 16 12:17:55.458245 kubelet[2271]: E1216 12:17:55.458228 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:55.458728 kubelet[2271]: I1216 12:17:55.458695 2271 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:17:55.458790 kubelet[2271]: I1216 12:17:55.458743 2271 server.go:1287] "Started kubelet" Dec 16 12:17:55.459012 kubelet[2271]: I1216 12:17:55.458978 2271 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:17:55.462508 kubelet[2271]: I1216 12:17:55.462461 2271 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:17:55.463661 kubelet[2271]: I1216 12:17:55.463638 2271 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:17:55.465505 kubelet[2271]: I1216 12:17:55.463688 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:17:55.465505 kubelet[2271]: I1216 12:17:55.465257 2271 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:17:55.465505 kubelet[2271]: I1216 12:17:55.465316 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:17:55.466047 kubelet[2271]: I1216 12:17:55.466023 2271 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:17:55.467614 kubelet[2271]: I1216 12:17:55.467583 2271 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:17:55.467695 kubelet[2271]: E1216 12:17:55.467632 2271 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:17:55.467730 kubelet[2271]: I1216 12:17:55.467705 2271 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:17:55.468409 kubelet[2271]: I1216 12:17:55.468382 2271 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:17:55.470002 kubelet[2271]: I1216 12:17:55.469964 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:17:55.470548 kubelet[2271]: E1216 12:17:55.468902 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Dec 16 12:17:55.470727 kubelet[2271]: E1216 12:17:55.466174 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b14d88072c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:17:55.458714715 +0000 UTC m=+1.015519240,LastTimestamp:2025-12-16 12:17:55.458714715 +0000 UTC m=+1.015519240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:17:55.472334 kubelet[2271]: W1216 12:17:55.472262 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Dec 16 12:17:55.472437 kubelet[2271]: E1216 12:17:55.472419 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:55.476004 kubelet[2271]: I1216 12:17:55.475960 2271 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:17:55.489803 kubelet[2271]: I1216 12:17:55.489773 2271 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:17:55.489803 kubelet[2271]: I1216 12:17:55.489792 2271 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:17:55.489803 kubelet[2271]: I1216 12:17:55.489815 2271 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:55.560867 kubelet[2271]: I1216 12:17:55.560535 2271 policy_none.go:49] "None policy: Start" Dec 16 12:17:55.560867 kubelet[2271]: I1216 12:17:55.560573 2271 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:17:55.560867 kubelet[2271]: I1216 12:17:55.560593 2271 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:17:55.566101 kubelet[2271]: I1216 12:17:55.565809 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:17:55.569955 kubelet[2271]: I1216 12:17:55.567541 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:17:55.569955 kubelet[2271]: I1216 12:17:55.567582 2271 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:17:55.569955 kubelet[2271]: I1216 12:17:55.567606 2271 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:17:55.569955 kubelet[2271]: I1216 12:17:55.567629 2271 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:17:55.569955 kubelet[2271]: E1216 12:17:55.567674 2271 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:17:55.569955 kubelet[2271]: E1216 12:17:55.567681 2271 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:17:55.569955 kubelet[2271]: W1216 12:17:55.569729 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Dec 16 12:17:55.569955 kubelet[2271]: E1216 12:17:55.569783 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:55.581605 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:17:55.620130 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:17:55.624060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:17:55.650818 kubelet[2271]: I1216 12:17:55.649915 2271 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:17:55.650818 kubelet[2271]: I1216 12:17:55.650370 2271 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:17:55.650818 kubelet[2271]: I1216 12:17:55.650399 2271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:17:55.652756 kubelet[2271]: I1216 12:17:55.651353 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:17:55.653783 kubelet[2271]: E1216 12:17:55.653725 2271 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:17:55.653950 kubelet[2271]: E1216 12:17:55.653928 2271 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:17:55.669257 kubelet[2271]: I1216 12:17:55.669210 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:17:55.670429 kubelet[2271]: I1216 12:17:55.670392 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:17:55.670529 kubelet[2271]: I1216 12:17:55.670458 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:17:55.671309 kubelet[2271]: E1216 12:17:55.671277 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Dec 16 12:17:55.683632 systemd[1]: Created slice kubepods-burstable-pod9bb7981a71739eb671b965328aa95fd2.slice - libcontainer container kubepods-burstable-pod9bb7981a71739eb671b965328aa95fd2.slice. Dec 16 12:17:55.696959 kubelet[2271]: E1216 12:17:55.696907 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:55.699898 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 16 12:17:55.708931 kubelet[2271]: E1216 12:17:55.708695 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:55.711359 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Dec 16 12:17:55.713173 kubelet[2271]: E1216 12:17:55.713140 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:55.754350 kubelet[2271]: I1216 12:17:55.754303 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:17:55.754930 kubelet[2271]: E1216 12:17:55.754897 2271 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Dec 16 12:17:55.772467 kubelet[2271]: I1216 12:17:55.772378 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:55.772467 kubelet[2271]: I1216 12:17:55.772421 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:55.772467 kubelet[2271]: I1216 12:17:55.772444 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:55.772467 kubelet[2271]: I1216 12:17:55.772477 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:17:55.772666 kubelet[2271]: I1216 12:17:55.772502 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:55.772666 kubelet[2271]: I1216 12:17:55.772516 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:55.958547 kubelet[2271]: I1216 12:17:55.958008 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:17:55.958547 kubelet[2271]: E1216 12:17:55.958375 2271 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Dec 16 12:17:55.999624 containerd[1497]: time="2025-12-16T12:17:55.998979457Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9bb7981a71739eb671b965328aa95fd2,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:56.009851 containerd[1497]: time="2025-12-16T12:17:56.009795714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:56.014860 containerd[1497]: time="2025-12-16T12:17:56.014802620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:56.024065 containerd[1497]: time="2025-12-16T12:17:56.024017573Z" level=info msg="connecting to shim 05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835" address="unix:///run/containerd/s/de23620461be2986f0ed043fa960e9040ba6fdf3ec5fa7c492cc328d8ca4b6d9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:56.051536 containerd[1497]: time="2025-12-16T12:17:56.051261466Z" level=info msg="connecting to shim 03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07" address="unix:///run/containerd/s/b8a3271918c544bc4dad0d932e681a02e26817a92e82a57abc41f68bde08bd81" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:56.056734 containerd[1497]: time="2025-12-16T12:17:56.056673131Z" level=info msg="connecting to shim f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e" address="unix:///run/containerd/s/32dd47a4db854c417cd4f1fe8864d32cafc204f47ca27017da93bbee47b2d234" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:56.058049 systemd[1]: Started cri-containerd-05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835.scope - libcontainer container 05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835. Dec 16 12:17:56.072627 kubelet[2271]: E1216 12:17:56.072389 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Dec 16 12:17:56.088226 systemd[1]: Started cri-containerd-03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07.scope - libcontainer container 03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07. Dec 16 12:17:56.090215 systemd[1]: Started cri-containerd-f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e.scope - libcontainer container f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e. 
Dec 16 12:17:56.115603 containerd[1497]: time="2025-12-16T12:17:56.115475148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9bb7981a71739eb671b965328aa95fd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835\"" Dec 16 12:17:56.120604 containerd[1497]: time="2025-12-16T12:17:56.120525660Z" level=info msg="CreateContainer within sandbox \"05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:17:56.136845 containerd[1497]: time="2025-12-16T12:17:56.136797553Z" level=info msg="Container c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:56.138506 containerd[1497]: time="2025-12-16T12:17:56.138462834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e\"" Dec 16 12:17:56.140712 containerd[1497]: time="2025-12-16T12:17:56.140666730Z" level=info msg="CreateContainer within sandbox \"f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:17:56.142267 containerd[1497]: time="2025-12-16T12:17:56.142204306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07\"" Dec 16 12:17:56.146942 containerd[1497]: time="2025-12-16T12:17:56.146903489Z" level=info msg="CreateContainer within sandbox \"03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:17:56.154326 containerd[1497]: time="2025-12-16T12:17:56.154249503Z" level=info msg="CreateContainer within sandbox \"05ca3595ccbc39d5dda7e26c000ceb5516d015354e81ab75e2aadac15844d835\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7\"" Dec 16 12:17:56.154995 containerd[1497]: time="2025-12-16T12:17:56.154911590Z" level=info msg="StartContainer for \"c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7\"" Dec 16 12:17:56.156286 containerd[1497]: time="2025-12-16T12:17:56.156127826Z" level=info msg="connecting to shim c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7" address="unix:///run/containerd/s/de23620461be2986f0ed043fa960e9040ba6fdf3ec5fa7c492cc328d8ca4b6d9" protocol=ttrpc version=3 Dec 16 12:17:56.160948 containerd[1497]: time="2025-12-16T12:17:56.160900725Z" level=info msg="Container 1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:56.165900 containerd[1497]: time="2025-12-16T12:17:56.165858153Z" level=info msg="Container a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:56.176578 containerd[1497]: time="2025-12-16T12:17:56.176532054Z" level=info msg="CreateContainer within sandbox \"f4ff18f40c4cfda5b1a096fe5715aa54675d30580d32ee9bd450210bff07a63e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab\"" Dec 16 12:17:56.177303 containerd[1497]: time="2025-12-16T12:17:56.177247332Z" level=info msg="StartContainer for \"1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab\"" Dec 16 12:17:56.179391 containerd[1497]: time="2025-12-16T12:17:56.179359343Z" level=info msg="connecting to shim 1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab" address="unix:///run/containerd/s/32dd47a4db854c417cd4f1fe8864d32cafc204f47ca27017da93bbee47b2d234" protocol=ttrpc version=3 Dec 16 12:17:56.179782 containerd[1497]: time="2025-12-16T12:17:56.179687911Z" level=info msg="CreateContainer within sandbox \"03543d9d80df11a4e94cc7c49ee6b2bad567c425416f3e739e95d1216343ac07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2\"" Dec 16 12:17:56.180180 containerd[1497]: time="2025-12-16T12:17:56.180143705Z" level=info msg="StartContainer for \"a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2\"" Dec 16 12:17:56.181387 containerd[1497]: time="2025-12-16T12:17:56.181346284Z" level=info msg="connecting to shim a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2" address="unix:///run/containerd/s/b8a3271918c544bc4dad0d932e681a02e26817a92e82a57abc41f68bde08bd81" protocol=ttrpc version=3 Dec 16 12:17:56.184048 systemd[1]: Started cri-containerd-c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7.scope - libcontainer container c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7. Dec 16 12:17:56.209081 systemd[1]: Started cri-containerd-1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab.scope - libcontainer container 1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab. Dec 16 12:17:56.211037 systemd[1]: Started cri-containerd-a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2.scope - libcontainer container a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2. 
Dec 16 12:17:56.251546 containerd[1497]: time="2025-12-16T12:17:56.251465760Z" level=info msg="StartContainer for \"c891c42b98f1756359ee24e1ccc4fa7c32893fa8c180c9ced2c77fa88a2835b7\" returns successfully" Dec 16 12:17:56.269901 containerd[1497]: time="2025-12-16T12:17:56.269761531Z" level=info msg="StartContainer for \"a17598021b8d3bcf106eef341d0c1f0ef115c2a908014178d89ce1862ff8b0e2\" returns successfully" Dec 16 12:17:56.270436 containerd[1497]: time="2025-12-16T12:17:56.270310050Z" level=info msg="StartContainer for \"1f53e2ffc6a24483e91a0a68f6b9f8265ff54cd051a8489649bafeae41912bab\" returns successfully" Dec 16 12:17:56.360051 kubelet[2271]: I1216 12:17:56.360005 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:17:56.577480 kubelet[2271]: E1216 12:17:56.577378 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:56.581640 kubelet[2271]: E1216 12:17:56.581416 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:56.583751 kubelet[2271]: E1216 12:17:56.583696 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:57.591849 kubelet[2271]: E1216 12:17:57.591750 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:57.593938 kubelet[2271]: E1216 12:17:57.593917 2271 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:17:57.681212 kubelet[2271]: E1216 12:17:57.681164 2271 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:17:57.793814 kubelet[2271]: I1216 12:17:57.793740 2271 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:17:57.793814 kubelet[2271]: E1216 12:17:57.793776 2271 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:17:57.868639 kubelet[2271]: I1216 12:17:57.868322 2271 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:17:57.876058 kubelet[2271]: E1216 12:17:57.876012 2271 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:17:57.876403 kubelet[2271]: I1216 12:17:57.876203 2271 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:57.878252 kubelet[2271]: E1216 12:17:57.878229 2271 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:17:57.878435 kubelet[2271]: I1216 12:17:57.878336 2271 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:17:57.880296 kubelet[2271]: E1216 12:17:57.880267 2271 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:17:58.458385 kubelet[2271]: I1216 12:17:58.458263 2271 apiserver.go:52] "Watching apiserver" Dec 16 12:17:58.467765 kubelet[2271]: I1216 12:17:58.467717 2271 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:18:00.405911 systemd[1]: Reload requested from client PID 2551 ('systemctl') (unit session-7.scope)... Dec 16 12:18:00.406013 systemd[1]: Reloading... Dec 16 12:18:00.502876 zram_generator::config[2595]: No configuration found. Dec 16 12:18:00.687416 systemd[1]: Reloading finished in 280 ms. Dec 16 12:18:00.713680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:18:00.730638 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:18:00.731124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:18:00.731183 systemd[1]: kubelet.service: Consumed 1.438s CPU time, 129.9M memory peak. Dec 16 12:18:00.733950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:18:00.864385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:18:00.870496 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:18:00.909943 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:18:00.909943 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:18:00.909943 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:18:00.910291 kubelet[2636]: I1216 12:18:00.909994 2636 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:18:00.920861 kubelet[2636]: I1216 12:18:00.920536 2636 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:18:00.920861 kubelet[2636]: I1216 12:18:00.920570 2636 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:18:00.922102 kubelet[2636]: I1216 12:18:00.922082 2636 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:18:00.923475 kubelet[2636]: I1216 12:18:00.923449 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:18:00.926245 kubelet[2636]: I1216 12:18:00.925984 2636 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:18:00.930260 kubelet[2636]: I1216 12:18:00.930215 2636 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:18:00.932892 kubelet[2636]: I1216 12:18:00.932867 2636 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:18:00.933091 kubelet[2636]: I1216 12:18:00.933062 2636 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:18:00.933316 kubelet[2636]: I1216 12:18:00.933088 2636 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:18:00.933316 kubelet[2636]: I1216 12:18:00.933315 2636 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:18:00.933423 kubelet[2636]: I1216 12:18:00.933327 2636 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:18:00.933489 kubelet[2636]: I1216 12:18:00.933470 2636 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:18:00.933631 kubelet[2636]: I1216 12:18:00.933619 2636 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:18:00.933662 kubelet[2636]: I1216 12:18:00.933637 2636 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:18:00.933662 kubelet[2636]: I1216 12:18:00.933659 2636 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:18:00.933711 kubelet[2636]: I1216 12:18:00.933670 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:18:00.934362 kubelet[2636]: I1216 12:18:00.934260 2636 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:18:00.936559 kubelet[2636]: I1216 12:18:00.936431 2636 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:18:00.938517 kubelet[2636]: I1216 12:18:00.937819 2636 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:18:00.938517 kubelet[2636]: I1216 12:18:00.937969 2636 server.go:1287] "Started kubelet" Dec 16 12:18:00.938881 kubelet[2636]: I1216 12:18:00.938666 2636 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:18:00.940407 kubelet[2636]: I1216 12:18:00.940379 2636 server.go:479] "Adding debug 
handlers to kubelet server" Dec 16 12:18:00.940627 kubelet[2636]: I1216 12:18:00.940561 2636 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:18:00.941541 kubelet[2636]: I1216 12:18:00.940451 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:18:00.947746 kubelet[2636]: E1216 12:18:00.947705 2636 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:18:00.948277 kubelet[2636]: I1216 12:18:00.939235 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:18:00.948732 kubelet[2636]: I1216 12:18:00.948688 2636 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:18:00.949951 kubelet[2636]: I1216 12:18:00.949844 2636 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:18:00.949994 kubelet[2636]: I1216 12:18:00.949961 2636 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:18:00.950146 kubelet[2636]: I1216 12:18:00.950067 2636 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:18:00.954590 kubelet[2636]: I1216 12:18:00.954556 2636 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:18:00.956246 kubelet[2636]: I1216 12:18:00.956174 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:18:00.958557 kubelet[2636]: E1216 12:18:00.957694 2636 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:18:00.959176 kubelet[2636]: I1216 12:18:00.959145 2636 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:18:00.959707 kubelet[2636]: I1216 12:18:00.959680 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:18:00.961097 kubelet[2636]: I1216 12:18:00.961022 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:18:00.961097 kubelet[2636]: I1216 12:18:00.961051 2636 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:18:00.961097 kubelet[2636]: I1216 12:18:00.961069 2636 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
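The container manager configuration dumped above lists the hard eviction thresholds this kubelet runs with: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A minimal Go sketch of how such a threshold can be evaluated against an observed value; the struct shape loosely mirrors the Signal/Operator/Value fields printed in the NodeConfig JSON and is not the kubelet's actual type:

package main

import "fmt"

// threshold mirrors the fields visible in the NodeConfig dump: a signal name,
// and either an absolute quantity (bytes) or a percentage of capacity.
type threshold struct {
	Signal     string
	Quantity   int64   // absolute limit in bytes; 0 means "use Percentage"
	Percentage float64 // fraction of capacity, e.g. 0.10 for nodefs.available
}

// crossed reports whether the observed available amount has fallen below the
// threshold, given the total capacity of the resource.
func (t threshold) crossed(available, capacity int64) bool {
	limit := t.Quantity
	if limit == 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	// Values from the log: memory.available hard-evicts below 100Mi,
	// nodefs.available below 10% of the filesystem size.
	memory := threshold{Signal: "memory.available", Quantity: 100 << 20}
	nodefs := threshold{Signal: "nodefs.available", Percentage: 0.10}

	fmt.Println(memory.Signal, "crossed:", memory.crossed(80<<20, 8<<30))   // 80Mi free of 8Gi -> true
	fmt.Println(nodefs.Signal, "crossed:", nodefs.crossed(20<<30, 100<<30)) // 20Gi free of 100Gi -> false
}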
Dec 16 12:18:00.961097 kubelet[2636]: I1216 12:18:00.961085 2636 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:18:00.961226 kubelet[2636]: E1216 12:18:00.961123 2636 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:18:00.995954 kubelet[2636]: I1216 12:18:00.995926 2636 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:18:00.995954 kubelet[2636]: I1216 12:18:00.995946 2636 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:18:00.996111 kubelet[2636]: I1216 12:18:00.995969 2636 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:18:00.996163 kubelet[2636]: I1216 12:18:00.996142 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:18:00.996193 kubelet[2636]: I1216 12:18:00.996158 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:18:00.996193 kubelet[2636]: I1216 12:18:00.996178 2636 policy_none.go:49] "None policy: Start" Dec 16 12:18:00.996193 kubelet[2636]: I1216 12:18:00.996187 2636 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:18:00.996270 kubelet[2636]: I1216 12:18:00.996196 2636 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:18:00.996369 kubelet[2636]: I1216 12:18:00.996306 2636 state_mem.go:75] "Updated machine memory state" Dec 16 12:18:01.000335 kubelet[2636]: I1216 12:18:01.000288 2636 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:18:01.000489 kubelet[2636]: I1216 12:18:01.000472 2636 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:18:01.000519 kubelet[2636]: I1216 12:18:01.000491 2636 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:18:01.001279 kubelet[2636]: I1216 12:18:01.001144 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:18:01.001965 kubelet[2636]: E1216 12:18:01.001945 2636 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:18:01.062882 kubelet[2636]: I1216 12:18:01.062537 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:01.062882 kubelet[2636]: I1216 12:18:01.062633 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:18:01.063030 kubelet[2636]: I1216 12:18:01.062966 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.102962 kubelet[2636]: I1216 12:18:01.102932 2636 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:18:01.110781 kubelet[2636]: I1216 12:18:01.110675 2636 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:18:01.111014 kubelet[2636]: I1216 12:18:01.111001 2636 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:18:01.151649 kubelet[2636]: I1216 12:18:01.151589 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.151649 kubelet[2636]: I1216 12:18:01.151632 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.151817 kubelet[2636]: I1216 12:18:01.151661 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:18:01.151817 kubelet[2636]: I1216 12:18:01.151680 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:01.151817 kubelet[2636]: I1216 12:18:01.151734 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:01.151817 kubelet[2636]: I1216 12:18:01.151772 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bb7981a71739eb671b965328aa95fd2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9bb7981a71739eb671b965328aa95fd2\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:01.151945 kubelet[2636]: I1216 12:18:01.151816 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.151945 kubelet[2636]: I1216 12:18:01.151862 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.151945 kubelet[2636]: I1216 12:18:01.151880 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.403736 sudo[2670]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 12:18:01.404049 sudo[2670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 12:18:01.733444 sudo[2670]: pam_unix(sudo:session): session closed for user root Dec 16 12:18:01.934601 kubelet[2636]: I1216 12:18:01.934553 2636 apiserver.go:52] "Watching apiserver" Dec 16 12:18:01.950555 kubelet[2636]: I1216 12:18:01.950523 2636 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:18:01.979284 kubelet[2636]: I1216 12:18:01.979240 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.980953 kubelet[2636]: I1216 12:18:01.979326 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:18:01.980953 kubelet[2636]: I1216 12:18:01.979563 2636 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:01.987715 kubelet[2636]: E1216 12:18:01.987208 2636 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:18:01.987813 kubelet[2636]: E1216 12:18:01.987771 2636 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:18:01.988299 kubelet[2636]: E1216 12:18:01.988274 2636 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:18:02.000628 kubelet[2636]: I1216 12:18:01.999599 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.999568125 podStartE2EDuration="999.568125ms" podCreationTimestamp="2025-12-16 12:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:01.999018758 +0000 UTC m=+1.125412613" watchObservedRunningTime="2025-12-16 12:18:01.999568125 +0000 UTC m=+1.125961980" Dec 16 12:18:02.017445 kubelet[2636]: I1216 12:18:02.017359 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.017340329 
podStartE2EDuration="1.017340329s" podCreationTimestamp="2025-12-16 12:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:02.008200616 +0000 UTC m=+1.134594471" watchObservedRunningTime="2025-12-16 12:18:02.017340329 +0000 UTC m=+1.143734184" Dec 16 12:18:02.027650 kubelet[2636]: I1216 12:18:02.027578 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.027559628 podStartE2EDuration="1.027559628s" podCreationTimestamp="2025-12-16 12:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:02.018717011 +0000 UTC m=+1.145110866" watchObservedRunningTime="2025-12-16 12:18:02.027559628 +0000 UTC m=+1.153953483" Dec 16 12:18:03.554258 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 16 12:18:03.555819 sshd[1712]: Connection closed by 10.0.0.1 port 42950 Dec 16 12:18:03.556359 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:03.559975 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:42950.service: Deactivated successfully. Dec 16 12:18:03.561907 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:18:03.562154 systemd[1]: session-7.scope: Consumed 6.640s CPU time, 259.5M memory peak. Dec 16 12:18:03.563048 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:18:03.564127 systemd-logind[1474]: Removed session 7. Dec 16 12:18:05.023578 kubelet[2636]: I1216 12:18:05.022011 2636 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:18:05.023578 kubelet[2636]: I1216 12:18:05.023035 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:18:05.023941 containerd[1497]: time="2025-12-16T12:18:05.022821019Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:18:05.716193 systemd[1]: Created slice kubepods-burstable-pod44f78a6c_b473_4194_bec2_350576799125.slice - libcontainer container kubepods-burstable-pod44f78a6c_b473_4194_bec2_350576799125.slice. Dec 16 12:18:05.722442 systemd[1]: Created slice kubepods-besteffort-podd23f5a28_6365_412e_a54e_2ab6e763277d.slice - libcontainer container kubepods-besteffort-podd23f5a28_6365_412e_a54e_2ab6e763277d.slice. 
Dec 16 12:18:05.776916 kubelet[2636]: I1216 12:18:05.776872 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-hostproc\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.776916 kubelet[2636]: I1216 12:18:05.776915 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-etc-cni-netd\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777151 kubelet[2636]: I1216 12:18:05.776939 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-hubble-tls\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777151 kubelet[2636]: I1216 12:18:05.776955 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-net\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777151 kubelet[2636]: I1216 12:18:05.776971 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-kernel\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777151 kubelet[2636]: I1216 12:18:05.776993 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-cgroup\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777151 kubelet[2636]: I1216 12:18:05.777008 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d23f5a28-6365-412e-a54e-2ab6e763277d-xtables-lock\") pod \"kube-proxy-dszqk\" (UID: \"d23f5a28-6365-412e-a54e-2ab6e763277d\") " pod="kube-system/kube-proxy-dszqk" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777035 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbgs\" (UniqueName: \"kubernetes.io/projected/d23f5a28-6365-412e-a54e-2ab6e763277d-kube-api-access-2mbgs\") pod \"kube-proxy-dszqk\" (UID: \"d23f5a28-6365-412e-a54e-2ab6e763277d\") " pod="kube-system/kube-proxy-dszqk" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777052 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-run\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777068 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-bpf-maps\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777082 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cni-path\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777096 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44f78a6c-b473-4194-bec2-350576799125-clustermesh-secrets\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777260 kubelet[2636]: I1216 12:18:05.777110 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-lib-modules\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777373 kubelet[2636]: I1216 12:18:05.777125 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-xtables-lock\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777373 kubelet[2636]: I1216 12:18:05.777143 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44f78a6c-b473-4194-bec2-350576799125-cilium-config-path\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777373 kubelet[2636]: I1216 12:18:05.777191 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jdlp\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp\") pod \"cilium-6vl2q\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " pod="kube-system/cilium-6vl2q" Dec 16 12:18:05.777373 kubelet[2636]: I1216 12:18:05.777229 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d23f5a28-6365-412e-a54e-2ab6e763277d-lib-modules\") pod \"kube-proxy-dszqk\" (UID: \"d23f5a28-6365-412e-a54e-2ab6e763277d\") " pod="kube-system/kube-proxy-dszqk" Dec 16 12:18:05.777373 kubelet[2636]: I1216 12:18:05.777253 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d23f5a28-6365-412e-a54e-2ab6e763277d-kube-proxy\") pod \"kube-proxy-dszqk\" (UID: \"d23f5a28-6365-412e-a54e-2ab6e763277d\") " pod="kube-system/kube-proxy-dszqk" Dec 16 12:18:05.897006 kubelet[2636]: E1216 12:18:05.896964 2636 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:18:05.897006 kubelet[2636]: E1216 12:18:05.896998 2636 projected.go:194] Error preparing data for projected volume kube-api-access-2mbgs for pod 
kube-system/kube-proxy-dszqk: configmap "kube-root-ca.crt" not found Dec 16 12:18:05.897182 kubelet[2636]: E1216 12:18:05.897081 2636 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d23f5a28-6365-412e-a54e-2ab6e763277d-kube-api-access-2mbgs podName:d23f5a28-6365-412e-a54e-2ab6e763277d nodeName:}" failed. No retries permitted until 2025-12-16 12:18:06.397059903 +0000 UTC m=+5.523453718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2mbgs" (UniqueName: "kubernetes.io/projected/d23f5a28-6365-412e-a54e-2ab6e763277d-kube-api-access-2mbgs") pod "kube-proxy-dszqk" (UID: "d23f5a28-6365-412e-a54e-2ab6e763277d") : configmap "kube-root-ca.crt" not found Dec 16 12:18:05.899241 kubelet[2636]: E1216 12:18:05.899192 2636 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:18:05.899241 kubelet[2636]: E1216 12:18:05.899223 2636 projected.go:194] Error preparing data for projected volume kube-api-access-6jdlp for pod kube-system/cilium-6vl2q: configmap "kube-root-ca.crt" not found Dec 16 12:18:05.899434 kubelet[2636]: E1216 12:18:05.899284 2636 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp podName:44f78a6c-b473-4194-bec2-350576799125 nodeName:}" failed. No retries permitted until 2025-12-16 12:18:06.399265748 +0000 UTC m=+5.525659603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6jdlp" (UniqueName: "kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp") pod "cilium-6vl2q" (UID: "44f78a6c-b473-4194-bec2-350576799125") : configmap "kube-root-ca.crt" not found Dec 16 12:18:06.158094 systemd[1]: Created slice kubepods-besteffort-pod77a3d05c_dc8c_40bd_ab91_8f3b25fd62c0.slice - libcontainer container kubepods-besteffort-pod77a3d05c_dc8c_40bd_ab91_8f3b25fd62c0.slice. 
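The projected-volume failures above are not fatal: the kube-api-access-* volumes cannot be built until the kube-root-ca.crt ConfigMap exists, so nestedpendingoperations schedules the mounts to be retried after a durationBeforeRetry of 500ms. A minimal Go sketch of that kind of retry-with-growing-delay loop; the 500ms base comes from the log, while the doubling and the cap are illustrative assumptions rather than the kubelet's exact policy:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New(`configmap "kube-root-ca.crt" not found`)

// mountServiceAccountVolume stands in for the projected-volume setup the log
// shows failing; it succeeds once the ConfigMap has appeared.
func mountServiceAccountVolume(ready bool) error {
	if !ready {
		return errNotFound
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // durationBeforeRetry from the log
	const maxDelay = 10 * time.Second

	for attempt := 1; ; attempt++ {
		ready := attempt >= 3 // pretend the ConfigMap appears on the third try
		if err := mountServiceAccountVolume(ready); err != nil {
			fmt.Printf("attempt %d: %v; no retries permitted for %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Printf("volume mounted on attempt %d\n", attempt)
		return
	}
}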
Dec 16 12:18:06.180168 kubelet[2636]: I1216 12:18:06.180113 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xwjvb\" (UID: \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\") " pod="kube-system/cilium-operator-6c4d7847fc-xwjvb" Dec 16 12:18:06.180168 kubelet[2636]: I1216 12:18:06.180173 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvpsn\" (UniqueName: \"kubernetes.io/projected/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-kube-api-access-lvpsn\") pod \"cilium-operator-6c4d7847fc-xwjvb\" (UID: \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\") " pod="kube-system/cilium-operator-6c4d7847fc-xwjvb" Dec 16 12:18:06.571381 containerd[1497]: time="2025-12-16T12:18:06.463624014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xwjvb,Uid:77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:06.621893 containerd[1497]: time="2025-12-16T12:18:06.621750068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vl2q,Uid:44f78a6c-b473-4194-bec2-350576799125,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:06.634746 containerd[1497]: time="2025-12-16T12:18:06.634514690Z" level=info msg="connecting to shim 1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0" address="unix:///run/containerd/s/497b7d73ed6465aadf516c8c6b196f41107e1de392a23e87c24eac7a937bc760" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:06.636608 containerd[1497]: time="2025-12-16T12:18:06.636565242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dszqk,Uid:d23f5a28-6365-412e-a54e-2ab6e763277d,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:06.649080 containerd[1497]: time="2025-12-16T12:18:06.649026886Z" level=info msg="connecting to shim 91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:06.664286 containerd[1497]: time="2025-12-16T12:18:06.664157444Z" level=info msg="connecting to shim 1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8" address="unix:///run/containerd/s/47b79c6e1f6ce86cb21d24355bff7c468d55df71c18e45090de006110b852bed" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:06.668072 systemd[1]: Started cri-containerd-1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0.scope - libcontainer container 1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0. Dec 16 12:18:06.673585 systemd[1]: Started cri-containerd-91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d.scope - libcontainer container 91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d. Dec 16 12:18:06.703059 systemd[1]: Started cri-containerd-1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8.scope - libcontainer container 1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8. 
Dec 16 12:18:06.725788 containerd[1497]: time="2025-12-16T12:18:06.725717335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xwjvb,Uid:77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\"" Dec 16 12:18:06.729857 containerd[1497]: time="2025-12-16T12:18:06.729129974Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 12:18:06.738564 containerd[1497]: time="2025-12-16T12:18:06.738502205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vl2q,Uid:44f78a6c-b473-4194-bec2-350576799125,Namespace:kube-system,Attempt:0,} returns sandbox id \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\"" Dec 16 12:18:06.741748 containerd[1497]: time="2025-12-16T12:18:06.741685088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dszqk,Uid:d23f5a28-6365-412e-a54e-2ab6e763277d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8\"" Dec 16 12:18:06.746045 containerd[1497]: time="2025-12-16T12:18:06.745997181Z" level=info msg="CreateContainer within sandbox \"1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:18:06.757873 containerd[1497]: time="2025-12-16T12:18:06.757223019Z" level=info msg="Container fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:06.765337 containerd[1497]: time="2025-12-16T12:18:06.765281580Z" level=info msg="CreateContainer within sandbox \"1d80a7016aeaf2acf74a79181ef7bcd28496d5eec798bb859033d59019c4dbe8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776\"" Dec 16 12:18:06.766417 containerd[1497]: time="2025-12-16T12:18:06.766332244Z" level=info msg="StartContainer for \"fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776\"" Dec 16 12:18:06.768305 containerd[1497]: time="2025-12-16T12:18:06.768266558Z" level=info msg="connecting to shim fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776" address="unix:///run/containerd/s/47b79c6e1f6ce86cb21d24355bff7c468d55df71c18e45090de006110b852bed" protocol=ttrpc version=3 Dec 16 12:18:06.795093 systemd[1]: Started cri-containerd-fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776.scope - libcontainer container fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776. Dec 16 12:18:06.883538 containerd[1497]: time="2025-12-16T12:18:06.883403205Z" level=info msg="StartContainer for \"fb38ff829f3edcb62a68f9caa14461aeb230b1f83a88da33016558566448a776\" returns successfully" Dec 16 12:18:08.120727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244227125.mount: Deactivated successfully. 
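The PullImage request above pins the cilium-operator image by tag and digest at once: quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f…. A minimal Go sketch splitting such a reference into repository, tag, and digest; this is plain string handling fitted to the shape seen in the log, not containerd's reference parser:

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference of the form repo[:tag][@digest] into its
// parts. It only handles the shape seen in the log, not every valid reference.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag is the part after the last colon, provided that colon comes
	// after the last slash (otherwise it would be a registry port).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	repo, tag, digest := splitRef(ref)
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}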
Dec 16 12:18:08.410962 containerd[1497]: time="2025-12-16T12:18:08.410820693Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:08.412051 containerd[1497]: time="2025-12-16T12:18:08.412020285Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 16 12:18:08.416604 containerd[1497]: time="2025-12-16T12:18:08.416539890Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:08.418785 containerd[1497]: time="2025-12-16T12:18:08.418738535Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.689560986s" Dec 16 12:18:08.418998 containerd[1497]: time="2025-12-16T12:18:08.418789590Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 16 12:18:08.419731 containerd[1497]: time="2025-12-16T12:18:08.419701377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 12:18:08.421681 containerd[1497]: time="2025-12-16T12:18:08.421280280Z" level=info msg="CreateContainer within sandbox \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 12:18:08.435469 containerd[1497]: time="2025-12-16T12:18:08.435415785Z" level=info msg="Container f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:08.443213 containerd[1497]: time="2025-12-16T12:18:08.443142371Z" level=info msg="CreateContainer within sandbox \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\"" Dec 16 12:18:08.444384 containerd[1497]: time="2025-12-16T12:18:08.444262619Z" level=info msg="StartContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\"" Dec 16 12:18:08.448461 containerd[1497]: time="2025-12-16T12:18:08.448360421Z" level=info msg="connecting to shim f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464" address="unix:///run/containerd/s/497b7d73ed6465aadf516c8c6b196f41107e1de392a23e87c24eac7a937bc760" protocol=ttrpc version=3 Dec 16 12:18:08.469053 systemd[1]: Started cri-containerd-f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464.scope - libcontainer container f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464. 
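The pull completes above: containerd reports 17135306 bytes read for the operator-generic image and a total pull time of 1.689560986s. A minimal Go sketch computing the effective throughput from those two figures:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures reported by containerd in the log for the cilium
	// operator-generic pull.
	const bytesRead = 17135306
	pullTime, err := time.ParseDuration("1.689560986s")
	if err != nil {
		panic(err)
	}

	// Roughly 10 MB/s for this pull.
	mbPerSec := float64(bytesRead) / 1e6 / pullTime.Seconds()
	fmt.Printf("pulled %.1f MB in %s (%.1f MB/s)\n", float64(bytesRead)/1e6, pullTime, mbPerSec)
}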
Dec 16 12:18:08.503261 containerd[1497]: time="2025-12-16T12:18:08.503220428Z" level=info msg="StartContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" returns successfully" Dec 16 12:18:09.025606 kubelet[2636]: I1216 12:18:09.025516 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dszqk" podStartSLOduration=4.023046191 podStartE2EDuration="4.023046191s" podCreationTimestamp="2025-12-16 12:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:07.010858992 +0000 UTC m=+6.137252847" watchObservedRunningTime="2025-12-16 12:18:09.023046191 +0000 UTC m=+8.149440006" Dec 16 12:18:09.026021 kubelet[2636]: I1216 12:18:09.025849 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xwjvb" podStartSLOduration=1.334150914 podStartE2EDuration="3.025824322s" podCreationTimestamp="2025-12-16 12:18:06 +0000 UTC" firstStartedPulling="2025-12-16 12:18:06.727912695 +0000 UTC m=+5.854306510" lastFinishedPulling="2025-12-16 12:18:08.419586063 +0000 UTC m=+7.545979918" observedRunningTime="2025-12-16 12:18:09.023023665 +0000 UTC m=+8.149417640" watchObservedRunningTime="2025-12-16 12:18:09.025824322 +0000 UTC m=+8.152218177" Dec 16 12:18:15.181884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257665435.mount: Deactivated successfully. Dec 16 12:18:16.367953 update_engine[1484]: I20251216 12:18:16.367882 1484 update_attempter.cc:509] Updating boot flags... Dec 16 12:18:16.572852 containerd[1497]: time="2025-12-16T12:18:16.571515936Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 16 12:18:16.577740 containerd[1497]: time="2025-12-16T12:18:16.577643515Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.157904929s" Dec 16 12:18:16.577886 containerd[1497]: time="2025-12-16T12:18:16.577782302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 16 12:18:16.578550 containerd[1497]: time="2025-12-16T12:18:16.578497000Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:16.579411 containerd[1497]: time="2025-12-16T12:18:16.579369407Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:16.607165 containerd[1497]: time="2025-12-16T12:18:16.607121827Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:18:16.626906 containerd[1497]: time="2025-12-16T12:18:16.626441424Z" level=info msg="Container 
c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:16.626558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107116356.mount: Deactivated successfully. Dec 16 12:18:16.660350 containerd[1497]: time="2025-12-16T12:18:16.660285056Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\"" Dec 16 12:18:16.662810 containerd[1497]: time="2025-12-16T12:18:16.662748450Z" level=info msg="StartContainer for \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\"" Dec 16 12:18:16.663780 containerd[1497]: time="2025-12-16T12:18:16.663721717Z" level=info msg="connecting to shim c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" protocol=ttrpc version=3 Dec 16 12:18:16.710029 systemd[1]: Started cri-containerd-c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec.scope - libcontainer container c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec. Dec 16 12:18:16.791240 containerd[1497]: time="2025-12-16T12:18:16.791100546Z" level=info msg="StartContainer for \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" returns successfully" Dec 16 12:18:16.802504 systemd[1]: cri-containerd-c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec.scope: Deactivated successfully. Dec 16 12:18:16.843120 containerd[1497]: time="2025-12-16T12:18:16.842958763Z" level=info msg="received container exit event container_id:\"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" id:\"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" pid:3124 exited_at:{seconds:1765887496 nanos:833072621}" Dec 16 12:18:17.033023 containerd[1497]: time="2025-12-16T12:18:17.032971048Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 12:18:17.054142 containerd[1497]: time="2025-12-16T12:18:17.054087474Z" level=info msg="Container 6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:17.068816 containerd[1497]: time="2025-12-16T12:18:17.068754279Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\"" Dec 16 12:18:17.070820 containerd[1497]: time="2025-12-16T12:18:17.070763447Z" level=info msg="StartContainer for \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\"" Dec 16 12:18:17.074053 containerd[1497]: time="2025-12-16T12:18:17.073998399Z" level=info msg="connecting to shim 6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" protocol=ttrpc version=3 Dec 16 12:18:17.098058 systemd[1]: Started cri-containerd-6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9.scope - libcontainer container 6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9. 
Dec 16 12:18:17.132307 containerd[1497]: time="2025-12-16T12:18:17.132268107Z" level=info msg="StartContainer for \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" returns successfully" Dec 16 12:18:17.145573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:18:17.145797 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:18:17.145995 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:18:17.148684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:18:17.150738 systemd[1]: cri-containerd-6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9.scope: Deactivated successfully. Dec 16 12:18:17.151774 containerd[1497]: time="2025-12-16T12:18:17.151737512Z" level=info msg="received container exit event container_id:\"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" id:\"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" pid:3172 exited_at:{seconds:1765887497 nanos:151044625}" Dec 16 12:18:17.175904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:18:17.622766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec-rootfs.mount: Deactivated successfully. Dec 16 12:18:18.038862 containerd[1497]: time="2025-12-16T12:18:18.038308817Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:18:18.068377 containerd[1497]: time="2025-12-16T12:18:18.065440267Z" level=info msg="Container 507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:18.106144 containerd[1497]: time="2025-12-16T12:18:18.105646396Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\"" Dec 16 12:18:18.108212 containerd[1497]: time="2025-12-16T12:18:18.106355080Z" level=info msg="StartContainer for \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\"" Dec 16 12:18:18.112257 containerd[1497]: time="2025-12-16T12:18:18.109202936Z" level=info msg="connecting to shim 507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" protocol=ttrpc version=3 Dec 16 12:18:18.143058 systemd[1]: Started cri-containerd-507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8.scope - libcontainer container 507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8. Dec 16 12:18:18.231220 containerd[1497]: time="2025-12-16T12:18:18.230271363Z" level=info msg="StartContainer for \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" returns successfully" Dec 16 12:18:18.230960 systemd[1]: cri-containerd-507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8.scope: Deactivated successfully. 
Dec 16 12:18:18.233468 containerd[1497]: time="2025-12-16T12:18:18.233418911Z" level=info msg="received container exit event container_id:\"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" id:\"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" pid:3218 exited_at:{seconds:1765887498 nanos:232935067}" Dec 16 12:18:18.620367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8-rootfs.mount: Deactivated successfully. Dec 16 12:18:19.045991 containerd[1497]: time="2025-12-16T12:18:19.045174084Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:18:19.062460 containerd[1497]: time="2025-12-16T12:18:19.062408387Z" level=info msg="Container 63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:19.081338 containerd[1497]: time="2025-12-16T12:18:19.081263119Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\"" Dec 16 12:18:19.082939 containerd[1497]: time="2025-12-16T12:18:19.082383626Z" level=info msg="StartContainer for \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\"" Dec 16 12:18:19.085827 containerd[1497]: time="2025-12-16T12:18:19.085008902Z" level=info msg="connecting to shim 63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" protocol=ttrpc version=3 Dec 16 12:18:19.111168 systemd[1]: Started cri-containerd-63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343.scope - libcontainer container 63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343. Dec 16 12:18:19.142069 systemd[1]: cri-containerd-63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343.scope: Deactivated successfully. Dec 16 12:18:19.144866 containerd[1497]: time="2025-12-16T12:18:19.144032988Z" level=info msg="received container exit event container_id:\"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" id:\"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" pid:3258 exited_at:{seconds:1765887499 nanos:143589234}" Dec 16 12:18:19.148070 containerd[1497]: time="2025-12-16T12:18:19.145946706Z" level=info msg="StartContainer for \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" returns successfully" Dec 16 12:18:19.166677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343-rootfs.mount: Deactivated successfully. 
Dec 16 12:18:20.059916 containerd[1497]: time="2025-12-16T12:18:20.059641954Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:18:20.090443 containerd[1497]: time="2025-12-16T12:18:20.088464681Z" level=info msg="Container 9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:20.105939 containerd[1497]: time="2025-12-16T12:18:20.105865318Z" level=info msg="CreateContainer within sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\"" Dec 16 12:18:20.106462 containerd[1497]: time="2025-12-16T12:18:20.106436928Z" level=info msg="StartContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\"" Dec 16 12:18:20.115783 containerd[1497]: time="2025-12-16T12:18:20.109763856Z" level=info msg="connecting to shim 9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704" address="unix:///run/containerd/s/a480a53e3747d37aad564a28320177b9001e3b2f26072ebb80955ffd9d9b9c22" protocol=ttrpc version=3 Dec 16 12:18:20.136145 systemd[1]: Started cri-containerd-9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704.scope - libcontainer container 9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704. Dec 16 12:18:20.184661 containerd[1497]: time="2025-12-16T12:18:20.184620477Z" level=info msg="StartContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" returns successfully" Dec 16 12:18:20.365912 kubelet[2636]: I1216 12:18:20.365788 2636 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:18:20.427706 systemd[1]: Created slice kubepods-burstable-pod115e8574_6b45_4c4d_bb41_04b8fab5ffe9.slice - libcontainer container kubepods-burstable-pod115e8574_6b45_4c4d_bb41_04b8fab5ffe9.slice. Dec 16 12:18:20.433967 systemd[1]: Created slice kubepods-burstable-pod62d937d1_6ae3_4be5_a2ab_f70d30e881b6.slice - libcontainer container kubepods-burstable-pod62d937d1_6ae3_4be5_a2ab_f70d30e881b6.slice. 
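Between the cilium image pull finishing and the CoreDNS pods being scheduled, the cilium pod's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and finally the cilium-agent container are each created in the same sandbox, started, and, for the init containers, immediately reported as exited. Purely as an illustrative sketch, grouping the containerd messages by the 64-character container id recovers that per-container lifecycle; the patterns are assumptions derived from the exact wording in this log.

```python
import re
from collections import defaultdict

# Three message shapes from the log above, all carrying the 64-hex container id.
PHASES = {
    "created": re.compile(r'returns container id \\"(?P<cid>[0-9a-f]{64})\\"'),
    "started": re.compile(r'StartContainer for \\"(?P<cid>[0-9a-f]{64})\\" returns successfully'),
    "exited":  re.compile(r'container exit event container_id:\\"(?P<cid>[0-9a-f]{64})\\"'),
}

def lifecycle(lines):
    """Map container id -> ordered list of observed phases (created, started, exited)."""
    phases_by_id = defaultdict(list)
    for line in lines:
        for phase, pattern in PHASES.items():
            if m := pattern.search(line):
                phases_by_id[m["cid"]].append(phase)
    return dict(phases_by_id)
```

Run over this journal it would show, for instance, c79059dd… reaching created → started → exited before 6cc6062c… is even created, which is the strict ordering Kubernetes imposes on init containers.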
Dec 16 12:18:20.587760 kubelet[2636]: I1216 12:18:20.587697 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqwj2\" (UniqueName: \"kubernetes.io/projected/115e8574-6b45-4c4d-bb41-04b8fab5ffe9-kube-api-access-rqwj2\") pod \"coredns-668d6bf9bc-cbbq2\" (UID: \"115e8574-6b45-4c4d-bb41-04b8fab5ffe9\") " pod="kube-system/coredns-668d6bf9bc-cbbq2" Dec 16 12:18:20.587760 kubelet[2636]: I1216 12:18:20.587757 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62d937d1-6ae3-4be5-a2ab-f70d30e881b6-config-volume\") pod \"coredns-668d6bf9bc-gt4t4\" (UID: \"62d937d1-6ae3-4be5-a2ab-f70d30e881b6\") " pod="kube-system/coredns-668d6bf9bc-gt4t4" Dec 16 12:18:20.587953 kubelet[2636]: I1216 12:18:20.587780 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/115e8574-6b45-4c4d-bb41-04b8fab5ffe9-config-volume\") pod \"coredns-668d6bf9bc-cbbq2\" (UID: \"115e8574-6b45-4c4d-bb41-04b8fab5ffe9\") " pod="kube-system/coredns-668d6bf9bc-cbbq2" Dec 16 12:18:20.587953 kubelet[2636]: I1216 12:18:20.587808 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlk7l\" (UniqueName: \"kubernetes.io/projected/62d937d1-6ae3-4be5-a2ab-f70d30e881b6-kube-api-access-nlk7l\") pod \"coredns-668d6bf9bc-gt4t4\" (UID: \"62d937d1-6ae3-4be5-a2ab-f70d30e881b6\") " pod="kube-system/coredns-668d6bf9bc-gt4t4" Dec 16 12:18:20.731827 containerd[1497]: time="2025-12-16T12:18:20.731707126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbbq2,Uid:115e8574-6b45-4c4d-bb41-04b8fab5ffe9,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:20.738461 containerd[1497]: time="2025-12-16T12:18:20.738415909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gt4t4,Uid:62d937d1-6ae3-4be5-a2ab-f70d30e881b6,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:21.097375 kubelet[2636]: I1216 12:18:21.095510 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vl2q" podStartSLOduration=6.243167218 podStartE2EDuration="16.095492184s" podCreationTimestamp="2025-12-16 12:18:05 +0000 UTC" firstStartedPulling="2025-12-16 12:18:06.739987772 +0000 UTC m=+5.866381627" lastFinishedPulling="2025-12-16 12:18:16.592312738 +0000 UTC m=+15.718706593" observedRunningTime="2025-12-16 12:18:21.093444314 +0000 UTC m=+20.219838169" watchObservedRunningTime="2025-12-16 12:18:21.095492184 +0000 UTC m=+20.221886039" Dec 16 12:18:22.301440 systemd-networkd[1429]: cilium_host: Link UP Dec 16 12:18:22.301609 systemd-networkd[1429]: cilium_net: Link UP Dec 16 12:18:22.301754 systemd-networkd[1429]: cilium_host: Gained carrier Dec 16 12:18:22.301913 systemd-networkd[1429]: cilium_net: Gained carrier Dec 16 12:18:22.384072 systemd-networkd[1429]: cilium_net: Gained IPv6LL Dec 16 12:18:22.407030 systemd-networkd[1429]: cilium_vxlan: Link UP Dec 16 12:18:22.407594 systemd-networkd[1429]: cilium_vxlan: Gained carrier Dec 16 12:18:22.725857 kernel: NET: Registered PF_ALG protocol family Dec 16 12:18:22.784034 systemd-networkd[1429]: cilium_host: Gained IPv6LL Dec 16 12:18:23.363093 systemd-networkd[1429]: lxc_health: Link UP Dec 16 12:18:23.371379 systemd-networkd[1429]: lxc_health: Gained carrier Dec 16 12:18:23.800934 kernel: eth0: renamed from tmp0a0e1 Dec 16 
12:18:23.801302 systemd-networkd[1429]: lxc92a6b61e4c89: Link UP Dec 16 12:18:23.805077 systemd-networkd[1429]: lxc704cbdd6cb0e: Link UP Dec 16 12:18:23.815859 kernel: eth0: renamed from tmp4a6b5 Dec 16 12:18:23.816641 systemd-networkd[1429]: lxc92a6b61e4c89: Gained carrier Dec 16 12:18:23.817526 systemd-networkd[1429]: lxc704cbdd6cb0e: Gained carrier Dec 16 12:18:23.983983 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL Dec 16 12:18:25.072001 systemd-networkd[1429]: lxc92a6b61e4c89: Gained IPv6LL Dec 16 12:18:25.136033 systemd-networkd[1429]: lxc_health: Gained IPv6LL Dec 16 12:18:25.520030 systemd-networkd[1429]: lxc704cbdd6cb0e: Gained IPv6LL Dec 16 12:18:27.713493 containerd[1497]: time="2025-12-16T12:18:27.713363555Z" level=info msg="connecting to shim 0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252" address="unix:///run/containerd/s/e0a1b251d562750edd6c3115043a7132e848985f3d33c2a52b6facf557aa2cfa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:27.716400 containerd[1497]: time="2025-12-16T12:18:27.716354904Z" level=info msg="connecting to shim 4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2" address="unix:///run/containerd/s/880b6d117b232420a7193363a9ba4feddd9bd5602147b9c3f535be971676e540" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:27.738154 systemd[1]: Started cri-containerd-0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252.scope - libcontainer container 0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252. Dec 16 12:18:27.753869 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:18:27.765050 systemd[1]: Started cri-containerd-4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2.scope - libcontainer container 4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2. 
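The systemd-networkd burst above is Cilium bringing up its datapath interfaces (cilium_host/cilium_net, the cilium_vxlan overlay, lxc_health, and an lxc* veth for each pod endpoint), each going Link UP → Gained carrier → Gained IPv6LL. A hedged sketch for collecting that per-interface timeline from the journal; the event strings are taken from the lines above and the regex itself is an assumption.

```python
import re
from collections import defaultdict

# e.g. "systemd-networkd[1429]: cilium_vxlan: Gained carrier"
LINK_EVENT = re.compile(
    r"systemd-networkd\[\d+\]: (?P<ifname>[\w.-]+): "
    r"(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)"
)

def link_timeline(lines):
    """Ordered state changes per interface, as systemd-networkd reported them."""
    timeline = defaultdict(list)
    for line in lines:
        if m := LINK_EVENT.search(line):
            timeline[m["ifname"]].append(m["event"])
    return dict(timeline)

if __name__ == "__main__":
    sample = [
        "Dec 16 12:18:22.301440 systemd-networkd[1429]: cilium_host: Link UP",
        "Dec 16 12:18:22.301754 systemd-networkd[1429]: cilium_host: Gained carrier",
        "Dec 16 12:18:22.784034 systemd-networkd[1429]: cilium_host: Gained IPv6LL",
    ]
    print(link_timeline(sample))   # {'cilium_host': ['Link UP', 'Gained carrier', 'Gained IPv6LL']}
```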
Dec 16 12:18:27.778259 containerd[1497]: time="2025-12-16T12:18:27.778201112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbbq2,Uid:115e8574-6b45-4c4d-bb41-04b8fab5ffe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252\"" Dec 16 12:18:27.781894 containerd[1497]: time="2025-12-16T12:18:27.781852698Z" level=info msg="CreateContainer within sandbox \"0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:18:27.782623 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:18:27.793046 containerd[1497]: time="2025-12-16T12:18:27.792994077Z" level=info msg="Container 48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:27.809627 containerd[1497]: time="2025-12-16T12:18:27.809583370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gt4t4,Uid:62d937d1-6ae3-4be5-a2ab-f70d30e881b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2\"" Dec 16 12:18:27.811353 containerd[1497]: time="2025-12-16T12:18:27.811314652Z" level=info msg="CreateContainer within sandbox \"0a0e12b4f3b49eb860975e416dbe360845b468a2c441bbba846d7d349a00c252\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8\"" Dec 16 12:18:27.811735 containerd[1497]: time="2025-12-16T12:18:27.811713178Z" level=info msg="StartContainer for \"48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8\"" Dec 16 12:18:27.812692 containerd[1497]: time="2025-12-16T12:18:27.812664489Z" level=info msg="CreateContainer within sandbox \"4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:18:27.814114 containerd[1497]: time="2025-12-16T12:18:27.814082015Z" level=info msg="connecting to shim 48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8" address="unix:///run/containerd/s/e0a1b251d562750edd6c3115043a7132e848985f3d33c2a52b6facf557aa2cfa" protocol=ttrpc version=3 Dec 16 12:18:27.821858 containerd[1497]: time="2025-12-16T12:18:27.821311857Z" level=info msg="Container a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:27.827704 containerd[1497]: time="2025-12-16T12:18:27.827639275Z" level=info msg="CreateContainer within sandbox \"4a6b5e671c56ccd92c1c004af11877cb1c193b6626211111b430c004b20b7ef2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920\"" Dec 16 12:18:27.828270 containerd[1497]: time="2025-12-16T12:18:27.828247666Z" level=info msg="StartContainer for \"a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920\"" Dec 16 12:18:27.829655 containerd[1497]: time="2025-12-16T12:18:27.829579781Z" level=info msg="connecting to shim a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920" address="unix:///run/containerd/s/880b6d117b232420a7193363a9ba4feddd9bd5602147b9c3f535be971676e540" protocol=ttrpc version=3 Dec 16 12:18:27.837043 systemd[1]: Started cri-containerd-48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8.scope - libcontainer container 
48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8. Dec 16 12:18:27.858041 systemd[1]: Started cri-containerd-a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920.scope - libcontainer container a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920. Dec 16 12:18:27.883959 containerd[1497]: time="2025-12-16T12:18:27.883802621Z" level=info msg="StartContainer for \"48eacd8c9911c432467bcf3939c7d0d6a4e3316c64c716b3b695f4bd78b63ef8\" returns successfully" Dec 16 12:18:27.895185 containerd[1497]: time="2025-12-16T12:18:27.895148703Z" level=info msg="StartContainer for \"a4f3826a488f80e31b8507770dffd03b62ad9d718a6199c36c3b89c57791a920\" returns successfully" Dec 16 12:18:28.093143 kubelet[2636]: I1216 12:18:28.092894 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gt4t4" podStartSLOduration=22.092874007 podStartE2EDuration="22.092874007s" podCreationTimestamp="2025-12-16 12:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:28.092619458 +0000 UTC m=+27.219013393" watchObservedRunningTime="2025-12-16 12:18:28.092874007 +0000 UTC m=+27.219267862" Dec 16 12:18:28.116727 kubelet[2636]: I1216 12:18:28.116307 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbbq2" podStartSLOduration=22.116189337 podStartE2EDuration="22.116189337s" podCreationTimestamp="2025-12-16 12:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:28.115864301 +0000 UTC m=+27.242258196" watchObservedRunningTime="2025-12-16 12:18:28.116189337 +0000 UTC m=+27.242583192" Dec 16 12:18:31.536419 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:49722.service - OpenSSH per-connection server daemon (10.0.0.1:49722). Dec 16 12:18:31.612068 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 49722 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:31.613458 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:31.620907 systemd-logind[1474]: New session 8 of user core. Dec 16 12:18:31.627019 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:18:31.781926 sshd[3989]: Connection closed by 10.0.0.1 port 49722 Dec 16 12:18:31.783102 sshd-session[3986]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:31.788266 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:49722.service: Deactivated successfully. Dec 16 12:18:31.790093 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:18:31.790834 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:18:31.791872 systemd-logind[1474]: Removed session 8. Dec 16 12:18:36.804211 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:49756.service - OpenSSH per-connection server daemon (10.0.0.1:49756). Dec 16 12:18:36.875067 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 49756 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:36.876865 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:36.886716 systemd-logind[1474]: New session 9 of user core. Dec 16 12:18:36.902129 systemd[1]: Started session-9.scope - Session 9 of User core. 
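The kubelet pod_startup_latency_tracker entries scattered through this part of the log expose the four timestamps behind the reported durations: pod creation, first/last image pull, and the time the pod was first observed running (the Go zero time 0001-01-01 marks pods that needed no pull). As a hedged sketch only, the arithmetic those entries imply can be reproduced as below; the helper names are hypothetical and the timestamp handling is an assumption about the kubelet output format shown here.

```python
import re
from datetime import datetime, timedelta, timezone

# kubelet prints e.g. "2025-12-16 12:18:06.727912695 +0000 UTC m=+5.854306510";
# the Go zero time "0001-01-01 00:00:00 +0000 UTC" means no image pull happened.
_TS = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.?(\d*) \+0000")

def parse_kubelet_time(raw):
    if raw.startswith("0001-01-01"):
        return None
    m = _TS.match(raw)
    base = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    micros = (m.group(2) or "0")[:6].ljust(6, "0")   # truncate nanoseconds to microseconds
    return base.replace(microsecond=int(micros))

def startup_durations(created, first_pull, last_pull, observed_running):
    """Recompute podStartE2EDuration and podStartSLOduration as the tracker reports them."""
    e2e = parse_kubelet_time(observed_running) - parse_kubelet_time(created)
    fp, lp = parse_kubelet_time(first_pull), parse_kubelet_time(last_pull)
    pulling = (lp - fp) if fp and lp else timedelta(0)
    return e2e, e2e - pulling   # the SLO duration excludes the image-pull window

if __name__ == "__main__":
    # Values copied from the cilium-operator-6c4d7847fc-xwjvb entry earlier in the log.
    e2e, slo = startup_durations(
        "2025-12-16 12:18:06 +0000 UTC",
        "2025-12-16 12:18:06.727912695 +0000 UTC m=+5.854306510",
        "2025-12-16 12:18:08.419586063 +0000 UTC m=+7.545979918",
        "2025-12-16 12:18:09.025824322 +0000 UTC m=+8.152218177",
    )
    print(round(e2e.total_seconds(), 3), round(slo.total_seconds(), 3))   # ≈ 3.026 and 1.334
```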
Dec 16 12:18:37.039196 sshd[4006]: Connection closed by 10.0.0.1 port 49756 Dec 16 12:18:37.039546 sshd-session[4003]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:37.044079 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:49756.service: Deactivated successfully. Dec 16 12:18:37.046056 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:18:37.047067 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:18:37.048619 systemd-logind[1474]: Removed session 9. Dec 16 12:18:42.055892 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:39520.service - OpenSSH per-connection server daemon (10.0.0.1:39520). Dec 16 12:18:42.128568 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:42.130935 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:42.135516 systemd-logind[1474]: New session 10 of user core. Dec 16 12:18:42.146660 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:18:42.292037 sshd[4025]: Connection closed by 10.0.0.1 port 39520 Dec 16 12:18:42.293300 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:42.302073 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:39520.service: Deactivated successfully. Dec 16 12:18:42.304341 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:18:42.307621 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:18:42.310192 systemd-logind[1474]: Removed session 10. Dec 16 12:18:47.313135 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:39542.service - OpenSSH per-connection server daemon (10.0.0.1:39542). Dec 16 12:18:47.384501 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:47.386128 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:47.391757 systemd-logind[1474]: New session 11 of user core. Dec 16 12:18:47.407143 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:18:47.536684 sshd[4042]: Connection closed by 10.0.0.1 port 39542 Dec 16 12:18:47.538475 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:47.547123 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:39542.service: Deactivated successfully. Dec 16 12:18:47.550059 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:18:47.551245 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:18:47.555265 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:39568.service - OpenSSH per-connection server daemon (10.0.0.1:39568). Dec 16 12:18:47.556515 systemd-logind[1474]: Removed session 11. Dec 16 12:18:47.631137 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 39568 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:47.634344 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:47.639609 systemd-logind[1474]: New session 12 of user core. Dec 16 12:18:47.649120 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:18:47.831712 sshd[4059]: Connection closed by 10.0.0.1 port 39568 Dec 16 12:18:47.832127 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:47.841474 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:39568.service: Deactivated successfully. Dec 16 12:18:47.843663 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:18:47.844539 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:18:47.848520 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:39584.service - OpenSSH per-connection server daemon (10.0.0.1:39584). Dec 16 12:18:47.849988 systemd-logind[1474]: Removed session 12. Dec 16 12:18:47.914934 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 39584 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:47.917567 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:47.924466 systemd-logind[1474]: New session 13 of user core. Dec 16 12:18:47.935107 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:18:48.075344 sshd[4074]: Connection closed by 10.0.0.1 port 39584 Dec 16 12:18:48.075761 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:48.080358 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:39584.service: Deactivated successfully. Dec 16 12:18:48.082359 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:18:48.083771 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:18:48.085129 systemd-logind[1474]: Removed session 13. Dec 16 12:18:53.092905 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:58268.service - OpenSSH per-connection server daemon (10.0.0.1:58268). Dec 16 12:18:53.153921 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 58268 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:53.154623 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:53.161236 systemd-logind[1474]: New session 14 of user core. Dec 16 12:18:53.182076 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:18:53.336670 sshd[4092]: Connection closed by 10.0.0.1 port 58268 Dec 16 12:18:53.336845 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:53.340962 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:58268.service: Deactivated successfully. Dec 16 12:18:53.343427 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:18:53.347196 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:18:53.348289 systemd-logind[1474]: Removed session 14. Dec 16 12:18:58.356040 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:58278.service - OpenSSH per-connection server daemon (10.0.0.1:58278). Dec 16 12:18:58.427349 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 58278 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:58.427993 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:58.436208 systemd-logind[1474]: New session 15 of user core. Dec 16 12:18:58.450132 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:18:58.587747 sshd[4109]: Connection closed by 10.0.0.1 port 58278 Dec 16 12:18:58.588464 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:58.599626 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:58278.service: Deactivated successfully. Dec 16 12:18:58.602885 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:18:58.604728 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:18:58.607071 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294). Dec 16 12:18:58.609380 systemd-logind[1474]: Removed session 15. Dec 16 12:18:58.674565 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:58.676059 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:58.680474 systemd-logind[1474]: New session 16 of user core. Dec 16 12:18:58.694193 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:18:58.932860 sshd[4126]: Connection closed by 10.0.0.1 port 58294 Dec 16 12:18:58.934312 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:58.952579 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:58294.service: Deactivated successfully. Dec 16 12:18:58.956568 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:18:58.957615 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:18:58.963916 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:58300.service - OpenSSH per-connection server daemon (10.0.0.1:58300). Dec 16 12:18:58.965326 systemd-logind[1474]: Removed session 16. Dec 16 12:18:59.031425 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 58300 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:59.033055 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:59.038874 systemd-logind[1474]: New session 17 of user core. Dec 16 12:18:59.049099 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:18:59.751240 sshd[4141]: Connection closed by 10.0.0.1 port 58300 Dec 16 12:18:59.751927 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:59.762960 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:58300.service: Deactivated successfully. Dec 16 12:18:59.771749 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:18:59.773420 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:18:59.777254 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:58306.service - OpenSSH per-connection server daemon (10.0.0.1:58306). Dec 16 12:18:59.780277 systemd-logind[1474]: Removed session 17. Dec 16 12:18:59.837798 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:18:59.839260 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:59.846288 systemd-logind[1474]: New session 18 of user core. Dec 16 12:18:59.852022 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:19:00.082626 sshd[4163]: Connection closed by 10.0.0.1 port 58306 Dec 16 12:19:00.083047 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:00.096128 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:58306.service: Deactivated successfully. Dec 16 12:19:00.098191 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:19:00.100570 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:19:00.104124 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:58320.service - OpenSSH per-connection server daemon (10.0.0.1:58320). Dec 16 12:19:00.105002 systemd-logind[1474]: Removed session 18. Dec 16 12:19:00.174180 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 58320 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:00.175701 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:00.180349 systemd-logind[1474]: New session 19 of user core. Dec 16 12:19:00.190288 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:19:00.309329 sshd[4178]: Connection closed by 10.0.0.1 port 58320 Dec 16 12:19:00.309886 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:00.314707 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:58320.service: Deactivated successfully. Dec 16 12:19:00.317190 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:19:00.319567 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:19:00.321022 systemd-logind[1474]: Removed session 19. Dec 16 12:19:05.328480 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:44380.service - OpenSSH per-connection server daemon (10.0.0.1:44380). Dec 16 12:19:05.388650 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 44380 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:05.390523 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:05.395245 systemd-logind[1474]: New session 20 of user core. Dec 16 12:19:05.403035 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:19:05.526909 sshd[4198]: Connection closed by 10.0.0.1 port 44380 Dec 16 12:19:05.527687 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:05.531697 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:44380.service: Deactivated successfully. Dec 16 12:19:05.536418 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:19:05.537517 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:19:05.539229 systemd-logind[1474]: Removed session 20. Dec 16 12:19:10.543327 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:44468.service - OpenSSH per-connection server daemon (10.0.0.1:44468). Dec 16 12:19:10.613176 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 44468 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:10.614616 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:10.618850 systemd-logind[1474]: New session 21 of user core. Dec 16 12:19:10.635996 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:19:10.762864 sshd[4217]: Connection closed by 10.0.0.1 port 44468 Dec 16 12:19:10.763338 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:10.766607 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:44468.service: Deactivated successfully. Dec 16 12:19:10.768425 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:19:10.769215 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:19:10.770364 systemd-logind[1474]: Removed session 21. Dec 16 12:19:15.774373 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:35898.service - OpenSSH per-connection server daemon (10.0.0.1:35898). Dec 16 12:19:15.852286 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 35898 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:15.854160 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:15.858658 systemd-logind[1474]: New session 22 of user core. Dec 16 12:19:15.868043 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:19:15.985878 sshd[4233]: Connection closed by 10.0.0.1 port 35898 Dec 16 12:19:15.986199 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:15.989855 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:35898.service: Deactivated successfully. Dec 16 12:19:15.991886 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:19:15.992764 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:19:15.994091 systemd-logind[1474]: Removed session 22. Dec 16 12:19:21.004093 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:34494.service - OpenSSH per-connection server daemon (10.0.0.1:34494). Dec 16 12:19:21.075625 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 34494 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:21.077056 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:21.082207 systemd-logind[1474]: New session 23 of user core. Dec 16 12:19:21.091075 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:19:21.216228 sshd[4250]: Connection closed by 10.0.0.1 port 34494 Dec 16 12:19:21.216369 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:21.230256 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:34494.service: Deactivated successfully. Dec 16 12:19:21.232190 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:19:21.233706 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:19:21.236031 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:34510.service - OpenSSH per-connection server daemon (10.0.0.1:34510). Dec 16 12:19:21.244906 systemd-logind[1474]: Removed session 23. Dec 16 12:19:21.305312 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 34510 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:21.306720 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:21.311402 systemd-logind[1474]: New session 24 of user core. Dec 16 12:19:21.334087 systemd[1]: Started session-24.scope - Session 24 of User core. 
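From here back to the first sshd@7 unit, every SSH connection follows the same cadence: a per-connection sshd@N service starts, pam_unix opens a session for the core user, and the session is closed and the unit deactivated again, typically well under a second later. A hedged sketch of turning those systemd-logind open/close pairs into per-session durations; the regexes are assumptions about the journal layout above, and the year is hard-coded because the syslog-style prefix omits it.

```python
import re
from datetime import datetime

# Journal lines look like "Dec 16 12:18:31.620907 systemd-logind[1474]: New session 8 of user core."
STAMP = re.compile(r"^(?P<month>\w{3}) (?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) ")
OPENED = re.compile(r"New session (?P<sid>\d+) of user (?P<user>\S+)\.")
CLOSED = re.compile(r"Removed session (?P<sid>\d+)\.")

def _when(line, year=2025):
    m = STAMP.match(line)
    return datetime.strptime(f"{year} {m['month']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    """Pair 'New session N' with 'Removed session N' and report how long each lasted."""
    open_at = {}
    for line in lines:
        if m := OPENED.search(line):
            open_at[m["sid"]] = _when(line)
        elif (m := CLOSED.search(line)) and m["sid"] in open_at:
            yield m["sid"], _when(line) - open_at.pop(m["sid"])

if __name__ == "__main__":
    sample = [
        "Dec 16 12:18:31.620907 systemd-logind[1474]: New session 8 of user core.",
        "Dec 16 12:18:31.791872 systemd-logind[1474]: Removed session 8.",
    ]
    for sid, lasted in session_durations(sample):
        print(f"session {sid}: {lasted.total_seconds():.3f}s")
```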
Dec 16 12:19:23.402759 containerd[1497]: time="2025-12-16T12:19:23.402677244Z" level=info msg="StopContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" with timeout 30 (s)" Dec 16 12:19:23.417194 containerd[1497]: time="2025-12-16T12:19:23.417137512Z" level=info msg="Stop container \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" with signal terminated" Dec 16 12:19:23.430852 containerd[1497]: time="2025-12-16T12:19:23.430791708Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:19:23.432376 systemd[1]: cri-containerd-f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464.scope: Deactivated successfully. Dec 16 12:19:23.434665 containerd[1497]: time="2025-12-16T12:19:23.433723161Z" level=info msg="received container exit event container_id:\"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" id:\"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" pid:3040 exited_at:{seconds:1765887563 nanos:433402244}" Dec 16 12:19:23.444668 containerd[1497]: time="2025-12-16T12:19:23.444393184Z" level=info msg="StopContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" with timeout 2 (s)" Dec 16 12:19:23.444936 containerd[1497]: time="2025-12-16T12:19:23.444800300Z" level=info msg="Stop container \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" with signal terminated" Dec 16 12:19:23.453728 systemd-networkd[1429]: lxc_health: Link DOWN Dec 16 12:19:23.453734 systemd-networkd[1429]: lxc_health: Lost carrier Dec 16 12:19:23.468230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464-rootfs.mount: Deactivated successfully. Dec 16 12:19:23.475747 systemd[1]: cri-containerd-9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704.scope: Deactivated successfully. Dec 16 12:19:23.476328 systemd[1]: cri-containerd-9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704.scope: Consumed 6.711s CPU time, 121.7M memory peak, 176K read from disk, 12.9M written to disk. Dec 16 12:19:23.478061 containerd[1497]: time="2025-12-16T12:19:23.478009358Z" level=info msg="received container exit event container_id:\"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" id:\"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" pid:3294 exited_at:{seconds:1765887563 nanos:477743120}" Dec 16 12:19:23.484862 containerd[1497]: time="2025-12-16T12:19:23.484670337Z" level=info msg="StopContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" returns successfully" Dec 16 12:19:23.485489 containerd[1497]: time="2025-12-16T12:19:23.485448370Z" level=info msg="StopPodSandbox for \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\"" Dec 16 12:19:23.493504 containerd[1497]: time="2025-12-16T12:19:23.493445257Z" level=info msg="Container to stop \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.501571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704-rootfs.mount: Deactivated successfully. 
Dec 16 12:19:23.503060 systemd[1]: cri-containerd-1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0.scope: Deactivated successfully. Dec 16 12:19:23.507010 containerd[1497]: time="2025-12-16T12:19:23.506972094Z" level=info msg="received sandbox exit event container_id:\"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" id:\"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" exit_status:137 exited_at:{seconds:1765887563 nanos:506577018}" monitor_name=podsandbox Dec 16 12:19:23.516793 containerd[1497]: time="2025-12-16T12:19:23.516738765Z" level=info msg="StopContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" returns successfully" Dec 16 12:19:23.517578 containerd[1497]: time="2025-12-16T12:19:23.517468559Z" level=info msg="StopPodSandbox for \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\"" Dec 16 12:19:23.517578 containerd[1497]: time="2025-12-16T12:19:23.517538358Z" level=info msg="Container to stop \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.517846 containerd[1497]: time="2025-12-16T12:19:23.517560318Z" level=info msg="Container to stop \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.517846 containerd[1497]: time="2025-12-16T12:19:23.517778956Z" level=info msg="Container to stop \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.517846 containerd[1497]: time="2025-12-16T12:19:23.517796996Z" level=info msg="Container to stop \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.517846 containerd[1497]: time="2025-12-16T12:19:23.517806356Z" level=info msg="Container to stop \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:19:23.526444 systemd[1]: cri-containerd-91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d.scope: Deactivated successfully. Dec 16 12:19:23.531370 containerd[1497]: time="2025-12-16T12:19:23.531169874Z" level=info msg="received sandbox exit event container_id:\"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" id:\"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" exit_status:137 exited_at:{seconds:1765887563 nanos:529562208}" monitor_name=podsandbox Dec 16 12:19:23.537520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0-rootfs.mount: Deactivated successfully. 
Dec 16 12:19:23.548604 containerd[1497]: time="2025-12-16T12:19:23.548542756Z" level=info msg="shim disconnected" id=1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0 namespace=k8s.io Dec 16 12:19:23.553497 containerd[1497]: time="2025-12-16T12:19:23.548599675Z" level=warning msg="cleaning up after shim disconnected" id=1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0 namespace=k8s.io Dec 16 12:19:23.553497 containerd[1497]: time="2025-12-16T12:19:23.553494191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:19:23.556157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d-rootfs.mount: Deactivated successfully. Dec 16 12:19:23.561675 containerd[1497]: time="2025-12-16T12:19:23.561598277Z" level=info msg="shim disconnected" id=91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d namespace=k8s.io Dec 16 12:19:23.561675 containerd[1497]: time="2025-12-16T12:19:23.561635516Z" level=warning msg="cleaning up after shim disconnected" id=91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d namespace=k8s.io Dec 16 12:19:23.561675 containerd[1497]: time="2025-12-16T12:19:23.561665356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:19:23.567616 containerd[1497]: time="2025-12-16T12:19:23.567196386Z" level=info msg="received sandbox container exit event sandbox_id:\"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" exit_status:137 exited_at:{seconds:1765887563 nanos:506577018}" monitor_name=criService Dec 16 12:19:23.568087 containerd[1497]: time="2025-12-16T12:19:23.568032458Z" level=info msg="TearDown network for sandbox \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" successfully" Dec 16 12:19:23.568087 containerd[1497]: time="2025-12-16T12:19:23.568075538Z" level=info msg="StopPodSandbox for \"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0\" returns successfully" Dec 16 12:19:23.568708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0-shm.mount: Deactivated successfully. 
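The sandbox teardown entries in this region carry protobuf-style exit records (an exit_status plus an exited_at{seconds, nanos} pair) rather than formatted timestamps. A hedged helper for decoding them, assuming only the message shape visible above:

```python
import re
from datetime import datetime, timezone

# e.g. ... exit_status:137 exited_at:{seconds:1765887563 nanos:506577018} ...
EXIT_AT = re.compile(
    r"exit_status:(?P<status>\d+) exited_at:\{seconds:(?P<sec>\d+) nanos:(?P<nanos>\d+)\}"
)

def exit_record(line):
    """Return (exit status, UTC datetime) for a containerd exit event line, or None."""
    m = EXIT_AT.search(line)
    if not m:
        return None
    when = datetime.fromtimestamp(int(m["sec"]), tz=timezone.utc)
    return int(m["status"]), when.replace(microsecond=int(m["nanos"]) // 1000)

if __name__ == "__main__":
    line = (
        'received sandbox exit event container_id:'
        '"1f38d2323703bc2b40d9d942a1465d17bdc59e0d4a5311c2a12596ef790156e0" '
        'exit_status:137 exited_at:{seconds:1765887563 nanos:506577018}'
    )
    status, when = exit_record(line)
    print(status, when.isoformat())   # 137 = 128 + 9, i.e. the sandbox process was killed with SIGKILL
```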
Dec 16 12:19:23.574824 containerd[1497]: time="2025-12-16T12:19:23.574760597Z" level=info msg="received sandbox container exit event sandbox_id:\"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" exit_status:137 exited_at:{seconds:1765887563 nanos:529562208}" monitor_name=criService Dec 16 12:19:23.575945 containerd[1497]: time="2025-12-16T12:19:23.575792867Z" level=info msg="TearDown network for sandbox \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" successfully" Dec 16 12:19:23.575945 containerd[1497]: time="2025-12-16T12:19:23.575906906Z" level=info msg="StopPodSandbox for \"91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d\" returns successfully" Dec 16 12:19:23.648684 kubelet[2636]: I1216 12:19:23.648629 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-kernel\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.648684 kubelet[2636]: I1216 12:19:23.648675 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cni-path\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.648684 kubelet[2636]: I1216 12:19:23.648696 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-run\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 12:19:23.648716 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-bpf-maps\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 12:19:23.648741 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44f78a6c-b473-4194-bec2-350576799125-clustermesh-secrets\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 12:19:23.648759 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44f78a6c-b473-4194-bec2-350576799125-cilium-config-path\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 12:19:23.648778 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-hubble-tls\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 12:19:23.648795 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-cgroup\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649152 kubelet[2636]: I1216 
12:19:23.648811 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-lib-modules\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648845 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-xtables-lock\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648864 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-hostproc\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648881 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-cilium-config-path\") pod \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\" (UID: \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648896 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-etc-cni-netd\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648912 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jdlp\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649289 kubelet[2636]: I1216 12:19:23.648930 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-net\") pod \"44f78a6c-b473-4194-bec2-350576799125\" (UID: \"44f78a6c-b473-4194-bec2-350576799125\") " Dec 16 12:19:23.649410 kubelet[2636]: I1216 12:19:23.648947 2636 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvpsn\" (UniqueName: \"kubernetes.io/projected/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-kube-api-access-lvpsn\") pod \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\" (UID: \"77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0\") " Dec 16 12:19:23.649640 kubelet[2636]: I1216 12:19:23.649608 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.649759 kubelet[2636]: I1216 12:19:23.649675 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.649759 kubelet[2636]: I1216 12:19:23.649697 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.650043 kubelet[2636]: I1216 12:19:23.649933 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cni-path" (OuterVolumeSpecName: "cni-path") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.650465 kubelet[2636]: I1216 12:19:23.650439 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.650570 kubelet[2636]: I1216 12:19:23.650553 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.651324 kubelet[2636]: I1216 12:19:23.651233 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.652339 kubelet[2636]: I1216 12:19:23.652300 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.653823 kubelet[2636]: I1216 12:19:23.653579 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:19:23.653823 kubelet[2636]: I1216 12:19:23.653673 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-kube-api-access-lvpsn" (OuterVolumeSpecName: "kube-api-access-lvpsn") pod "77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0" (UID: "77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0"). InnerVolumeSpecName "kube-api-access-lvpsn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:19:23.653823 kubelet[2636]: I1216 12:19:23.653714 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.653823 kubelet[2636]: I1216 12:19:23.653741 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-hostproc" (OuterVolumeSpecName: "hostproc") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:19:23.655307 kubelet[2636]: I1216 12:19:23.655274 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp" (OuterVolumeSpecName: "kube-api-access-6jdlp") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "kube-api-access-6jdlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:19:23.658794 kubelet[2636]: I1216 12:19:23.658748 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f78a6c-b473-4194-bec2-350576799125-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:19:23.659900 kubelet[2636]: I1216 12:19:23.659860 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0" (UID: "77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:19:23.660417 kubelet[2636]: I1216 12:19:23.660371 2636 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44f78a6c-b473-4194-bec2-350576799125-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44f78a6c-b473-4194-bec2-350576799125" (UID: "44f78a6c-b473-4194-bec2-350576799125"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:19:23.750063 kubelet[2636]: I1216 12:19:23.749991 2636 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750063 kubelet[2636]: I1216 12:19:23.750036 2636 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750063 kubelet[2636]: I1216 12:19:23.750053 2636 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750063 kubelet[2636]: I1216 12:19:23.750064 2636 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jdlp\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-kube-api-access-6jdlp\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750063 kubelet[2636]: I1216 12:19:23.750073 2636 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750090 2636 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvpsn\" (UniqueName: \"kubernetes.io/projected/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0-kube-api-access-lvpsn\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750099 2636 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750106 2636 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750114 2636 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750121 2636 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750129 2636 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44f78a6c-b473-4194-bec2-350576799125-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750136 2636 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44f78a6c-b473-4194-bec2-350576799125-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750314 kubelet[2636]: I1216 12:19:23.750144 2636 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44f78a6c-b473-4194-bec2-350576799125-hubble-tls\") on 
node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750466 kubelet[2636]: I1216 12:19:23.750152 2636 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750466 kubelet[2636]: I1216 12:19:23.750168 2636 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:23.750466 kubelet[2636]: I1216 12:19:23.750183 2636 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f78a6c-b473-4194-bec2-350576799125-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 16 12:19:24.212976 kubelet[2636]: I1216 12:19:24.212934 2636 scope.go:117] "RemoveContainer" containerID="9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704" Dec 16 12:19:24.216162 containerd[1497]: time="2025-12-16T12:19:24.216107607Z" level=info msg="RemoveContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\"" Dec 16 12:19:24.221909 systemd[1]: Removed slice kubepods-burstable-pod44f78a6c_b473_4194_bec2_350576799125.slice - libcontainer container kubepods-burstable-pod44f78a6c_b473_4194_bec2_350576799125.slice. Dec 16 12:19:24.222025 systemd[1]: kubepods-burstable-pod44f78a6c_b473_4194_bec2_350576799125.slice: Consumed 6.816s CPU time, 122.1M memory peak, 216K read from disk, 12.9M written to disk. Dec 16 12:19:24.225400 containerd[1497]: time="2025-12-16T12:19:24.225353536Z" level=info msg="RemoveContainer for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" returns successfully" Dec 16 12:19:24.226024 kubelet[2636]: I1216 12:19:24.225953 2636 scope.go:117] "RemoveContainer" containerID="63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343" Dec 16 12:19:24.228309 systemd[1]: Removed slice kubepods-besteffort-pod77a3d05c_dc8c_40bd_ab91_8f3b25fd62c0.slice - libcontainer container kubepods-besteffort-pod77a3d05c_dc8c_40bd_ab91_8f3b25fd62c0.slice. 
Dec 16 12:19:24.234306 containerd[1497]: time="2025-12-16T12:19:24.234248187Z" level=info msg="RemoveContainer for \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\"" Dec 16 12:19:24.240353 containerd[1497]: time="2025-12-16T12:19:24.240267980Z" level=info msg="RemoveContainer for \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" returns successfully" Dec 16 12:19:24.240692 kubelet[2636]: I1216 12:19:24.240662 2636 scope.go:117] "RemoveContainer" containerID="507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8" Dec 16 12:19:24.243855 containerd[1497]: time="2025-12-16T12:19:24.243144238Z" level=info msg="RemoveContainer for \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\"" Dec 16 12:19:24.248811 containerd[1497]: time="2025-12-16T12:19:24.248748914Z" level=info msg="RemoveContainer for \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" returns successfully" Dec 16 12:19:24.249227 kubelet[2636]: I1216 12:19:24.249198 2636 scope.go:117] "RemoveContainer" containerID="6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9" Dec 16 12:19:24.252862 containerd[1497]: time="2025-12-16T12:19:24.252733324Z" level=info msg="RemoveContainer for \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\"" Dec 16 12:19:24.257233 containerd[1497]: time="2025-12-16T12:19:24.257193969Z" level=info msg="RemoveContainer for \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" returns successfully" Dec 16 12:19:24.257487 kubelet[2636]: I1216 12:19:24.257457 2636 scope.go:117] "RemoveContainer" containerID="c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec" Dec 16 12:19:24.259543 containerd[1497]: time="2025-12-16T12:19:24.259518751Z" level=info msg="RemoveContainer for \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\"" Dec 16 12:19:24.262796 containerd[1497]: time="2025-12-16T12:19:24.262700366Z" level=info msg="RemoveContainer for \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" returns successfully" Dec 16 12:19:24.263227 kubelet[2636]: I1216 12:19:24.262934 2636 scope.go:117] "RemoveContainer" containerID="9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704" Dec 16 12:19:24.263330 containerd[1497]: time="2025-12-16T12:19:24.263156283Z" level=error msg="ContainerStatus for \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\": not found" Dec 16 12:19:24.263559 kubelet[2636]: E1216 12:19:24.263527 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\": not found" containerID="9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704" Dec 16 12:19:24.268031 kubelet[2636]: I1216 12:19:24.267874 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704"} err="failed to get container status \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f0afb4c98f17aa04077631994feec67d242e690d4fed21dc616369f06d04704\": not found" Dec 16 12:19:24.268106 kubelet[2636]: I1216 12:19:24.268043 
2636 scope.go:117] "RemoveContainer" containerID="63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343" Dec 16 12:19:24.268380 containerd[1497]: time="2025-12-16T12:19:24.268314683Z" level=error msg="ContainerStatus for \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\": not found" Dec 16 12:19:24.268527 kubelet[2636]: E1216 12:19:24.268493 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\": not found" containerID="63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343" Dec 16 12:19:24.268576 kubelet[2636]: I1216 12:19:24.268526 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343"} err="failed to get container status \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\": rpc error: code = NotFound desc = an error occurred when try to find container \"63497e138792e05f3eefae589cd099cbbebd07477183749066b5ab2003366343\": not found" Dec 16 12:19:24.268576 kubelet[2636]: I1216 12:19:24.268542 2636 scope.go:117] "RemoveContainer" containerID="507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8" Dec 16 12:19:24.268760 containerd[1497]: time="2025-12-16T12:19:24.268714640Z" level=error msg="ContainerStatus for \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\": not found" Dec 16 12:19:24.269033 kubelet[2636]: E1216 12:19:24.268952 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\": not found" containerID="507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8" Dec 16 12:19:24.269033 kubelet[2636]: I1216 12:19:24.268980 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8"} err="failed to get container status \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"507dfece47629f2c26c62fa24386f8e898f6abed2b6fd15434ddab41c70271d8\": not found" Dec 16 12:19:24.269033 kubelet[2636]: I1216 12:19:24.268996 2636 scope.go:117] "RemoveContainer" containerID="6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9" Dec 16 12:19:24.269319 containerd[1497]: time="2025-12-16T12:19:24.269265755Z" level=error msg="ContainerStatus for \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\": not found" Dec 16 12:19:24.269595 kubelet[2636]: E1216 12:19:24.269569 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\": 
not found" containerID="6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9" Dec 16 12:19:24.269634 kubelet[2636]: I1216 12:19:24.269599 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9"} err="failed to get container status \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6cc6062cf093a58b3d59265b03acaeb99a2bf0b0d340ee0711b4af47026bc5f9\": not found" Dec 16 12:19:24.269634 kubelet[2636]: I1216 12:19:24.269619 2636 scope.go:117] "RemoveContainer" containerID="c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec" Dec 16 12:19:24.270016 containerd[1497]: time="2025-12-16T12:19:24.269968310Z" level=error msg="ContainerStatus for \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\": not found" Dec 16 12:19:24.270243 kubelet[2636]: E1216 12:19:24.270200 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\": not found" containerID="c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec" Dec 16 12:19:24.270302 kubelet[2636]: I1216 12:19:24.270248 2636 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec"} err="failed to get container status \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\": rpc error: code = NotFound desc = an error occurred when try to find container \"c79059ddce0ba6e18740f4a9d75be3a3b7a49030a666fbab5ac403540c423eec\": not found" Dec 16 12:19:24.270302 kubelet[2636]: I1216 12:19:24.270268 2636 scope.go:117] "RemoveContainer" containerID="f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464" Dec 16 12:19:24.272629 containerd[1497]: time="2025-12-16T12:19:24.272601930Z" level=info msg="RemoveContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\"" Dec 16 12:19:24.275892 containerd[1497]: time="2025-12-16T12:19:24.275754825Z" level=info msg="RemoveContainer for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" returns successfully" Dec 16 12:19:24.276061 kubelet[2636]: I1216 12:19:24.275981 2636 scope.go:117] "RemoveContainer" containerID="f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464" Dec 16 12:19:24.276328 containerd[1497]: time="2025-12-16T12:19:24.276283141Z" level=error msg="ContainerStatus for \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\": not found" Dec 16 12:19:24.276437 kubelet[2636]: E1216 12:19:24.276415 2636 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\": not found" containerID="f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464" Dec 16 12:19:24.276465 kubelet[2636]: I1216 12:19:24.276446 2636 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464"} err="failed to get container status \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\": rpc error: code = NotFound desc = an error occurred when try to find container \"f67b2931c4a31ed275a15cf953d57b3eac7e87f073fa7c5c868fa918f0a7c464\": not found" Dec 16 12:19:24.468020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91d4ad7d5d47e16501f98d5fcebf01254b3f86119f04f38ed20f5f0830a20b8d-shm.mount: Deactivated successfully. Dec 16 12:19:24.468133 systemd[1]: var-lib-kubelet-pods-44f78a6c\x2db473\x2d4194\x2dbec2\x2d350576799125-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jdlp.mount: Deactivated successfully. Dec 16 12:19:24.468215 systemd[1]: var-lib-kubelet-pods-77a3d05c\x2ddc8c\x2d40bd\x2dab91\x2d8f3b25fd62c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvpsn.mount: Deactivated successfully. Dec 16 12:19:24.468287 systemd[1]: var-lib-kubelet-pods-44f78a6c\x2db473\x2d4194\x2dbec2\x2d350576799125-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 12:19:24.468335 systemd[1]: var-lib-kubelet-pods-44f78a6c\x2db473\x2d4194\x2dbec2\x2d350576799125-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 12:19:24.964180 kubelet[2636]: I1216 12:19:24.964125 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f78a6c-b473-4194-bec2-350576799125" path="/var/lib/kubelet/pods/44f78a6c-b473-4194-bec2-350576799125/volumes" Dec 16 12:19:24.964885 kubelet[2636]: I1216 12:19:24.964863 2636 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0" path="/var/lib/kubelet/pods/77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0/volumes" Dec 16 12:19:25.336389 sshd[4266]: Connection closed by 10.0.0.1 port 34510 Dec 16 12:19:25.337062 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:25.346822 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:34510.service: Deactivated successfully. Dec 16 12:19:25.349017 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:19:25.349977 systemd[1]: session-24.scope: Consumed 1.348s CPU time, 24.4M memory peak. Dec 16 12:19:25.350677 systemd-logind[1474]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:19:25.353125 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Dec 16 12:19:25.355237 systemd-logind[1474]: Removed session 24. Dec 16 12:19:25.420790 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:25.421963 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:25.430058 systemd-logind[1474]: New session 25 of user core. Dec 16 12:19:25.445096 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:19:26.031051 kubelet[2636]: E1216 12:19:26.030996 2636 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:19:26.640810 sshd[4416]: Connection closed by 10.0.0.1 port 34524 Dec 16 12:19:26.638728 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:26.651008 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:34524.service: Deactivated successfully. Dec 16 12:19:26.654324 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:19:26.655956 systemd[1]: session-25.scope: Consumed 1.055s CPU time, 23.6M memory peak. Dec 16 12:19:26.658987 systemd-logind[1474]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:19:26.663160 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:34530.service - OpenSSH per-connection server daemon (10.0.0.1:34530). Dec 16 12:19:26.669720 systemd-logind[1474]: Removed session 25. Dec 16 12:19:26.670263 kubelet[2636]: I1216 12:19:26.669973 2636 memory_manager.go:355] "RemoveStaleState removing state" podUID="44f78a6c-b473-4194-bec2-350576799125" containerName="cilium-agent" Dec 16 12:19:26.670263 kubelet[2636]: I1216 12:19:26.670001 2636 memory_manager.go:355] "RemoveStaleState removing state" podUID="77a3d05c-dc8c-40bd-ab91-8f3b25fd62c0" containerName="cilium-operator" Dec 16 12:19:26.687612 systemd[1]: Created slice kubepods-burstable-poda8ff10b8_a86b_4f84_9dca_7bf731708e42.slice - libcontainer container kubepods-burstable-poda8ff10b8_a86b_4f84_9dca_7bf731708e42.slice. Dec 16 12:19:26.739193 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 34530 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:26.740540 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:26.745185 systemd-logind[1474]: New session 26 of user core. Dec 16 12:19:26.755040 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:19:26.771027 kubelet[2636]: I1216 12:19:26.770922 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-cilium-cgroup\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771027 kubelet[2636]: I1216 12:19:26.770966 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-etc-cni-netd\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771027 kubelet[2636]: I1216 12:19:26.770988 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ff10b8-a86b-4f84-9dca-7bf731708e42-cilium-config-path\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771027 kubelet[2636]: I1216 12:19:26.771006 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8ff10b8-a86b-4f84-9dca-7bf731708e42-clustermesh-secrets\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771068 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-cni-path\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771131 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkmw7\" (UniqueName: \"kubernetes.io/projected/a8ff10b8-a86b-4f84-9dca-7bf731708e42-kube-api-access-mkmw7\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771164 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-hostproc\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771202 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-lib-modules\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771218 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-xtables-lock\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771249 kubelet[2636]: I1216 12:19:26.771234 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-bpf-maps\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771465 kubelet[2636]: I1216 12:19:26.771290 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-host-proc-sys-kernel\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771465 kubelet[2636]: I1216 12:19:26.771341 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-cilium-run\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771465 kubelet[2636]: I1216 12:19:26.771364 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8ff10b8-a86b-4f84-9dca-7bf731708e42-host-proc-sys-net\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771465 kubelet[2636]: I1216 12:19:26.771380 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8ff10b8-a86b-4f84-9dca-7bf731708e42-hubble-tls\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.771465 kubelet[2636]: I1216 12:19:26.771438 2636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8ff10b8-a86b-4f84-9dca-7bf731708e42-cilium-ipsec-secrets\") pod \"cilium-ldxs5\" (UID: \"a8ff10b8-a86b-4f84-9dca-7bf731708e42\") " pod="kube-system/cilium-ldxs5" Dec 16 12:19:26.807331 sshd[4433]: Connection closed by 10.0.0.1 port 34530 Dec 16 12:19:26.807932 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:26.820541 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:34530.service: Deactivated successfully. Dec 16 12:19:26.824249 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:19:26.826434 systemd-logind[1474]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:19:26.835502 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:34552.service - OpenSSH per-connection server daemon (10.0.0.1:34552). Dec 16 12:19:26.836294 systemd-logind[1474]: Removed session 26. Dec 16 12:19:26.906803 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 34552 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:19:26.908720 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:26.912982 systemd-logind[1474]: New session 27 of user core. Dec 16 12:19:26.924062 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 12:19:26.992875 containerd[1497]: time="2025-12-16T12:19:26.992459062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ldxs5,Uid:a8ff10b8-a86b-4f84-9dca-7bf731708e42,Namespace:kube-system,Attempt:0,}" Dec 16 12:19:27.007071 containerd[1497]: time="2025-12-16T12:19:27.007011755Z" level=info msg="connecting to shim 87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:19:27.029072 systemd[1]: Started cri-containerd-87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746.scope - libcontainer container 87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746. Dec 16 12:19:27.061482 containerd[1497]: time="2025-12-16T12:19:27.061436982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ldxs5,Uid:a8ff10b8-a86b-4f84-9dca-7bf731708e42,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\"" Dec 16 12:19:27.066664 containerd[1497]: time="2025-12-16T12:19:27.066559602Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:19:27.073480 containerd[1497]: time="2025-12-16T12:19:27.073434935Z" level=info msg="Container 13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:19:27.079921 containerd[1497]: time="2025-12-16T12:19:27.079874990Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7\"" Dec 16 12:19:27.081308 containerd[1497]: time="2025-12-16T12:19:27.080898946Z" level=info msg="StartContainer for \"13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7\"" Dec 16 12:19:27.082086 containerd[1497]: time="2025-12-16T12:19:27.082057821Z" level=info msg="connecting to shim 13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" protocol=ttrpc version=3 Dec 16 12:19:27.102057 systemd[1]: Started cri-containerd-13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7.scope - libcontainer container 13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7. Dec 16 12:19:27.136248 containerd[1497]: time="2025-12-16T12:19:27.136210689Z" level=info msg="StartContainer for \"13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7\" returns successfully" Dec 16 12:19:27.144260 systemd[1]: cri-containerd-13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7.scope: Deactivated successfully. 
Dec 16 12:19:27.147622 containerd[1497]: time="2025-12-16T12:19:27.147577605Z" level=info msg="received container exit event container_id:\"13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7\" id:\"13e32a7739fb521d6e71949bee749fe491319c5a1e062db69c45fe31b73764a7\" pid:4512 exited_at:{seconds:1765887567 nanos:147235326}" Dec 16 12:19:27.231410 containerd[1497]: time="2025-12-16T12:19:27.231267477Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 12:19:27.242705 containerd[1497]: time="2025-12-16T12:19:27.242646992Z" level=info msg="Container 2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:19:27.250543 containerd[1497]: time="2025-12-16T12:19:27.250495202Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599\"" Dec 16 12:19:27.251276 containerd[1497]: time="2025-12-16T12:19:27.251250879Z" level=info msg="StartContainer for \"2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599\"" Dec 16 12:19:27.252463 containerd[1497]: time="2025-12-16T12:19:27.252349874Z" level=info msg="connecting to shim 2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" protocol=ttrpc version=3 Dec 16 12:19:27.276060 systemd[1]: Started cri-containerd-2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599.scope - libcontainer container 2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599. Dec 16 12:19:27.310601 containerd[1497]: time="2025-12-16T12:19:27.310561486Z" level=info msg="StartContainer for \"2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599\" returns successfully" Dec 16 12:19:27.317260 systemd[1]: cri-containerd-2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599.scope: Deactivated successfully. 
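[Editor's note, illustrative only] containerd reports each container exit as an epoch seconds/nanos pair (the exited_at fields above). Converting the first one, seconds:1765887567 nanos:147235326, lands on the same instant as the surrounding 2025-12-16T12:19:27Z containerd timestamps; a quick check:

    # Convert containerd's exited_at {seconds, nanos} to a readable UTC timestamp.
    from datetime import datetime, timezone

    seconds, nanos = 1765887567, 147235326
    exited = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited.isoformat())   # 2025-12-16T12:19:27.147235+00:00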
Dec 16 12:19:27.318859 containerd[1497]: time="2025-12-16T12:19:27.318765694Z" level=info msg="received container exit event container_id:\"2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599\" id:\"2a22d13b93b8549ef7a049f4b41c35beffad17ec2a79ee1fa099411d89c09599\" pid:4557 exited_at:{seconds:1765887567 nanos:318296896}" Dec 16 12:19:28.238769 containerd[1497]: time="2025-12-16T12:19:28.238722656Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 12:19:28.265888 containerd[1497]: time="2025-12-16T12:19:28.265472504Z" level=info msg="Container d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:19:28.282055 containerd[1497]: time="2025-12-16T12:19:28.281976499Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd\"" Dec 16 12:19:28.284680 containerd[1497]: time="2025-12-16T12:19:28.283384655Z" level=info msg="StartContainer for \"d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd\"" Dec 16 12:19:28.288714 containerd[1497]: time="2025-12-16T12:19:28.288668241Z" level=info msg="connecting to shim d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" protocol=ttrpc version=3 Dec 16 12:19:28.325061 systemd[1]: Started cri-containerd-d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd.scope - libcontainer container d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd. Dec 16 12:19:28.403552 containerd[1497]: time="2025-12-16T12:19:28.403513208Z" level=info msg="StartContainer for \"d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd\" returns successfully" Dec 16 12:19:28.406080 systemd[1]: cri-containerd-d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd.scope: Deactivated successfully. Dec 16 12:19:28.409272 containerd[1497]: time="2025-12-16T12:19:28.409029393Z" level=info msg="received container exit event container_id:\"d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd\" id:\"d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd\" pid:4602 exited_at:{seconds:1765887568 nanos:408487035}" Dec 16 12:19:28.430804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d86018380ede0092678ec8359e51db1d1d1250746de8a08028dc6de453b3e6fd-rootfs.mount: Deactivated successfully. Dec 16 12:19:29.252013 containerd[1497]: time="2025-12-16T12:19:29.251949227Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 12:19:29.264523 containerd[1497]: time="2025-12-16T12:19:29.263820648Z" level=info msg="Container 90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:19:29.269916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102085069.mount: Deactivated successfully. 
Dec 16 12:19:29.273704 containerd[1497]: time="2025-12-16T12:19:29.273660753Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078\"" Dec 16 12:19:29.274865 containerd[1497]: time="2025-12-16T12:19:29.274716911Z" level=info msg="StartContainer for \"90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078\"" Dec 16 12:19:29.276213 containerd[1497]: time="2025-12-16T12:19:29.276175749Z" level=info msg="connecting to shim 90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" protocol=ttrpc version=3 Dec 16 12:19:29.309081 systemd[1]: Started cri-containerd-90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078.scope - libcontainer container 90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078. Dec 16 12:19:29.336813 systemd[1]: cri-containerd-90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078.scope: Deactivated successfully. Dec 16 12:19:29.341541 containerd[1497]: time="2025-12-16T12:19:29.341253687Z" level=info msg="received container exit event container_id:\"90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078\" id:\"90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078\" pid:4642 exited_at:{seconds:1765887569 nanos:337827813}" Dec 16 12:19:29.343084 containerd[1497]: time="2025-12-16T12:19:29.343050965Z" level=info msg="StartContainer for \"90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078\" returns successfully" Dec 16 12:19:29.349304 containerd[1497]: time="2025-12-16T12:19:29.339107371Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ff10b8_a86b_4f84_9dca_7bf731708e42.slice/cri-containerd-90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078.scope/memory.events\": no such file or directory" Dec 16 12:19:29.365230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90efa8567a90c70ba684cb8dffee46ece69ff87e5fcb3400f59989e772f0f078-rootfs.mount: Deactivated successfully. Dec 16 12:19:30.253087 containerd[1497]: time="2025-12-16T12:19:30.252995709Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 12:19:30.267319 containerd[1497]: time="2025-12-16T12:19:30.265295623Z" level=info msg="Container f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:19:30.270958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906284956.mount: Deactivated successfully. 
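[Editor's note, illustrative only] By this point the log has shown CreateContainer requests for all five containers of cilium-ldxs5 in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally cilium-agent. A short sketch that recovers that order from the CreateContainer messages; node.log is again a hypothetical plain-text copy of the excerpt, one entry per line.

    # Recover the container creation order for the cilium-ldxs5 sandbox.
    import re

    names = re.findall(r'CreateContainer within sandbox .*?&ContainerMetadata\{Name:([^,]+),',
                       open("node.log").read())
    # Each name appears twice (the request and the "returns container id" reply);
    # keep first occurrences, preserving order.
    seen, order = set(), []
    for n in names:
        if n not in seen:
            seen.add(n)
            order.append(n)
    print(" -> ".join(order))
    # mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent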
Dec 16 12:19:30.277777 containerd[1497]: time="2025-12-16T12:19:30.277726058Z" level=info msg="CreateContainer within sandbox \"87f98c20c04c31d1d2f657e3a0fe35cb9eb7337a9f04bae9d2e3d97eef5f8746\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1\"" Dec 16 12:19:30.280006 containerd[1497]: time="2025-12-16T12:19:30.279955737Z" level=info msg="StartContainer for \"f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1\"" Dec 16 12:19:30.281166 containerd[1497]: time="2025-12-16T12:19:30.281132736Z" level=info msg="connecting to shim f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1" address="unix:///run/containerd/s/57c2a453e8e9578caf222d0e6244cbcfa64e0fa10709692b1c657f89e49d6032" protocol=ttrpc version=3 Dec 16 12:19:30.311076 systemd[1]: Started cri-containerd-f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1.scope - libcontainer container f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1. Dec 16 12:19:30.355264 containerd[1497]: time="2025-12-16T12:19:30.355216224Z" level=info msg="StartContainer for \"f2dcd7d07de3696053b76ef6a6a4037d4099eb9e831efa02f3c3db8cc766c7d1\" returns successfully" Dec 16 12:19:30.629025 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 16 12:19:33.553736 systemd-networkd[1429]: lxc_health: Link UP Dec 16 12:19:33.554811 systemd-networkd[1429]: lxc_health: Gained carrier Dec 16 12:19:34.959977 systemd-networkd[1429]: lxc_health: Gained IPv6LL Dec 16 12:19:35.015916 kubelet[2636]: I1216 12:19:35.015343 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ldxs5" podStartSLOduration=9.015322635 podStartE2EDuration="9.015322635s" podCreationTimestamp="2025-12-16 12:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:31.293157136 +0000 UTC m=+90.419551031" watchObservedRunningTime="2025-12-16 12:19:35.015322635 +0000 UTC m=+94.141716490" Dec 16 12:19:39.730804 sshd[4447]: Connection closed by 10.0.0.1 port 34552 Dec 16 12:19:39.732206 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:39.736182 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:34552.service: Deactivated successfully. Dec 16 12:19:39.740279 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:19:39.743446 systemd-logind[1474]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:19:39.745582 systemd-logind[1474]: Removed session 27.
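[Editor's note, illustrative only] The pod_startup_latency_tracker entry above reports podStartSLOduration=9.015322635s for cilium-ldxs5, which is simply the observedRunningTime (2025-12-16 12:19:35.015322635 UTC) minus the podCreationTimestamp (2025-12-16 12:19:26 UTC); a quick arithmetic check, with nanoseconds truncated to microseconds:

    # Verify the reported pod startup duration from the two timestamps in the log.
    from datetime import datetime, timezone

    created = datetime(2025, 12, 16, 12, 19, 26, tzinfo=timezone.utc)
    running = datetime(2025, 12, 16, 12, 19, 35, 15322, tzinfo=timezone.utc)  # .015322635 truncated
    print((running - created).total_seconds())   # 9.015322, matching the logged 9.015322635s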