Dec 12 17:34:14.785441 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:34:14.785462 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:34:14.785472 kernel: KASLR enabled
Dec 12 17:34:14.785477 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:34:14.785482 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 17:34:14.785488 kernel: random: crng init done
Dec 12 17:34:14.785495 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 17:34:14.785553 kernel: secureboot: Secure boot enabled
Dec 12 17:34:14.785564 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:34:14.785573 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 17:34:14.785579 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:34:14.785585 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785591 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785596 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785604 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785611 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785617 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785623 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785629 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785635 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:14.785641 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:34:14.785647 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:34:14.785653 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:14.785659 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 17:34:14.785693 kernel: Zone ranges:
Dec 12 17:34:14.785703 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:14.785709 kernel: DMA32 empty
Dec 12 17:34:14.785715 kernel: Normal empty
Dec 12 17:34:14.785720 kernel: Device empty
Dec 12 17:34:14.785733 kernel: Movable zone start for each node
Dec 12 17:34:14.785740 kernel: Early memory node ranges
Dec 12 17:34:14.785746 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 17:34:14.785752 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 17:34:14.785758 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 17:34:14.785764 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 17:34:14.785769 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 17:34:14.785775 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 17:34:14.785800 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 17:34:14.785806 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 17:34:14.785812 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:34:14.785822 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:14.785828 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:34:14.785834 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 17:34:14.785841 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:34:14.785849 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:34:14.785855 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:34:14.785861 kernel: psci: Trusted OS migration not required
Dec 12 17:34:14.785867 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:34:14.785874 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:34:14.785880 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:34:14.785887 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:34:14.785893 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:34:14.785900 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:34:14.785907 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:34:14.785914 kernel: CPU features: detected: Spectre-v4
Dec 12 17:34:14.785920 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:34:14.785926 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:34:14.785932 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:34:14.785939 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:34:14.785945 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:34:14.785951 kernel: alternatives: applying boot alternatives
Dec 12 17:34:14.785959 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:34:14.785965 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:34:14.785972 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:34:14.785980 kernel: Fallback order for Node 0: 0
Dec 12 17:34:14.785986 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:34:14.785992 kernel: Policy zone: DMA
Dec 12 17:34:14.785999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:34:14.786005 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:34:14.786011 kernel: software IO TLB: area num 4.
Dec 12 17:34:14.786017 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:34:14.786024 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 17:34:14.786030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:34:14.786037 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:34:14.786044 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:34:14.786050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:34:14.786058 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:34:14.786065 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:34:14.786071 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:34:14.786077 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:34:14.786084 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:34:14.786090 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:34:14.786097 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:34:14.786103 kernel: GICv3: 256 SPIs implemented
Dec 12 17:34:14.786109 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:34:14.786115 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:34:14.786122 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:34:14.786128 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:34:14.786136 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:34:14.786142 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:34:14.786148 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:34:14.786155 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:34:14.786161 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:34:14.786168 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:34:14.786174 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:34:14.786180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:14.786187 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:34:14.786193 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:34:14.786200 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:34:14.786207 kernel: arm-pv: using stolen time PV
Dec 12 17:34:14.786214 kernel: Console: colour dummy device 80x25
Dec 12 17:34:14.786221 kernel: ACPI: Core revision 20240827
Dec 12 17:34:14.786227 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:34:14.786234 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:34:14.786241 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:34:14.786247 kernel: landlock: Up and running.
Dec 12 17:34:14.786253 kernel: SELinux: Initializing.
Dec 12 17:34:14.786260 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:34:14.786268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:34:14.786274 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:34:14.786281 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:34:14.786288 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:34:14.786295 kernel: Remapping and enabling EFI services.
Dec 12 17:34:14.786301 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:34:14.786307 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:34:14.786314 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:34:14.786320 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:34:14.786329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:14.786340 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:34:14.786347 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:34:14.786355 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:34:14.786362 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:34:14.786369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:14.786376 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:34:14.786383 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:34:14.786391 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:34:14.786398 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:34:14.786405 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:14.786411 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:34:14.786418 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:34:14.786425 kernel: SMP: Total of 4 processors activated.
Dec 12 17:34:14.786432 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:34:14.786439 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:34:14.786446 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:34:14.786453 kernel: CPU features: detected: Common not Private translations
Dec 12 17:34:14.786461 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:34:14.786468 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:34:14.786475 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:34:14.786482 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:34:14.786489 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:34:14.786496 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:34:14.786503 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:34:14.786509 kernel: alternatives: applying system-wide alternatives
Dec 12 17:34:14.786516 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:34:14.786525 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 12 17:34:14.786532 kernel: devtmpfs: initialized
Dec 12 17:34:14.786539 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:34:14.786546 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:34:14.786553 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:34:14.786560 kernel: 0 pages in range for non-PLT usage
Dec 12 17:34:14.786567 kernel: 508400 pages in range for PLT usage
Dec 12 17:34:14.786573 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:34:14.786580 kernel: SMBIOS 3.0.0 present.
Dec 12 17:34:14.786589 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:34:14.786596 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:34:14.786603 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:34:14.786610 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:34:14.786617 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:34:14.786624 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:34:14.786631 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:34:14.786638 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Dec 12 17:34:14.786645 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:34:14.786653 kernel: cpuidle: using governor menu
Dec 12 17:34:14.786660 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:34:14.786667 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:34:14.786698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:34:14.786708 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:34:14.786715 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:34:14.786722 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:34:14.786733 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:34:14.786740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:34:14.786749 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:34:14.786756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:34:14.786763 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:34:14.786770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:34:14.786776 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:34:14.786793 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:34:14.786801 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:34:14.786808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:34:14.786814 kernel: ACPI: Interpreter enabled
Dec 12 17:34:14.786823 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:34:14.786830 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:34:14.786837 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:34:14.786844 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:34:14.786851 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:34:14.786857 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:34:14.786864 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:34:14.786912 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:34:14.786921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:34:14.787065 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:34:14.787130 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:34:14.787188 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:34:14.787246 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:34:14.787303 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:34:14.787312 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:34:14.787319 kernel: PCI host bridge to bus 0000:00
Dec 12 17:34:14.787386 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:34:14.787440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:34:14.787493 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:34:14.787544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:34:14.787623 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:34:14.787693 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:34:14.787771 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:34:14.787856 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:34:14.788031 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:34:14.788101 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:34:14.788161 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:34:14.788220 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:34:14.788276 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:34:14.788335 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:34:14.788557 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:34:14.788574 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:34:14.788582 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:34:14.788589 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:34:14.788596 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:34:14.788603 kernel: iommu: Default domain type: Translated
Dec 12 17:34:14.788610 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:34:14.788617 kernel: efivars: Registered efivars operations
Dec 12 17:34:14.788628 kernel: vgaarb: loaded
Dec 12 17:34:14.788635 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:34:14.788642 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:34:14.788649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:34:14.788656 kernel: pnp: PnP ACPI init
Dec 12 17:34:14.788757 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:34:14.788769 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:34:14.788776 kernel: NET: Registered PF_INET protocol family
Dec 12 17:34:14.788808 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:34:14.788816 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:34:14.788823 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:34:14.788830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:34:14.788837 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:34:14.788844 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:34:14.788851 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:34:14.788861 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:34:14.788870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:34:14.788880 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:34:14.788887 kernel: kvm [1]: HYP mode not available
Dec 12 17:34:14.788894 kernel: Initialise system trusted keyrings
Dec 12 17:34:14.788901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:34:14.788908 kernel: Key type asymmetric registered
Dec 12 17:34:14.788915 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:34:14.788922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:34:14.788929 kernel: io scheduler mq-deadline registered
Dec 12 17:34:14.788936 kernel: io scheduler kyber registered
Dec 12 17:34:14.788945 kernel: io scheduler bfq registered
Dec 12 17:34:14.788952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:34:14.788959 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:34:14.788966 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:34:14.789036 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:34:14.789046 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:34:14.789053 kernel: thunder_xcv, ver 1.0
Dec 12 17:34:14.789060 kernel: thunder_bgx, ver 1.0
Dec 12 17:34:14.789067 kernel: nicpf, ver 1.0
Dec 12 17:34:14.789076 kernel: nicvf, ver 1.0
Dec 12 17:34:14.789146 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:34:14.789203 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:34:14 UTC (1765560854)
Dec 12 17:34:14.789212 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:34:14.789220 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:34:14.789227 kernel: watchdog: NMI not fully supported
Dec 12 17:34:14.789234 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:34:14.789241 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:34:14.789250 kernel: Segment Routing with IPv6
Dec 12 17:34:14.789257 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:34:14.789264 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:34:14.789270 kernel: Key type dns_resolver registered
Dec 12 17:34:14.789277 kernel: registered taskstats version 1
Dec 12 17:34:14.789284 kernel: Loading compiled-in X.509 certificates
Dec 12 17:34:14.789291 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:34:14.789298 kernel: Demotion targets for Node 0: null
Dec 12 17:34:14.789305 kernel: Key type .fscrypt registered
Dec 12 17:34:14.789313 kernel: Key type fscrypt-provisioning registered
Dec 12 17:34:14.789320 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:34:14.789327 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:34:14.789334 kernel: ima: No architecture policies found
Dec 12 17:34:14.789341 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:34:14.789407 kernel: clk: Disabling unused clocks
Dec 12 17:34:14.789417 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:34:14.789424 kernel: Warning: unable to open an initial console.
Dec 12 17:34:14.789431 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:34:14.789490 kernel: Run /init as init process
Dec 12 17:34:14.789606 kernel: with arguments:
Dec 12 17:34:14.789622 kernel: /init
Dec 12 17:34:14.789629 kernel: with environment:
Dec 12 17:34:14.789636 kernel: HOME=/
Dec 12 17:34:14.789643 kernel: TERM=linux
Dec 12 17:34:14.789651 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:34:14.789662 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:34:14.789674 systemd[1]: Detected virtualization kvm.
Dec 12 17:34:14.789682 systemd[1]: Detected architecture arm64.
Dec 12 17:34:14.789689 systemd[1]: Running in initrd.
Dec 12 17:34:14.789696 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:34:14.789704 systemd[1]: Hostname set to .
Dec 12 17:34:14.789711 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:34:14.789719 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:34:14.789764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:14.789780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:14.789803 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:34:14.789811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:34:14.789818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:34:14.789827 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:34:14.789835 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:34:14.789845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:34:14.789853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:14.789860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:14.789868 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:34:14.789875 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:34:14.789883 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:34:14.789891 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:34:14.789898 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:34:14.789905 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:34:14.789914 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:34:14.789922 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:34:14.789930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:14.789937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:14.789945 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:14.789952 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:34:14.789960 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:34:14.789967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:34:14.789976 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:34:14.789985 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:34:14.789992 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:34:14.790000 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:34:14.790007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:34:14.790015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:14.790022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:14.790032 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:34:14.790039 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:34:14.790051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:34:14.790085 systemd-journald[246]: Collecting audit messages is disabled.
Dec 12 17:34:14.790106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:14.790115 systemd-journald[246]: Journal started
Dec 12 17:34:14.790133 systemd-journald[246]: Runtime Journal (/run/log/journal/66fa825b725c4680b38a1e59363d0adf) is 6M, max 48.5M, 42.4M free.
Dec 12 17:34:14.780148 systemd-modules-load[247]: Inserted module 'overlay'
Dec 12 17:34:14.792878 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:34:14.795767 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:34:14.797197 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:34:14.799833 kernel: Bridge firewalling registered
Dec 12 17:34:14.798736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:34:14.798868 systemd-modules-load[247]: Inserted module 'br_netfilter'
Dec 12 17:34:14.799934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:14.816973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:34:14.819915 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:34:14.823346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:34:14.825439 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:34:14.828481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:14.831673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:14.834038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:14.835846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:14.838054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:34:14.846354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:34:14.861991 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:34:14.875337 systemd-resolved[286]: Positive Trust Anchors:
Dec 12 17:34:14.875354 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:34:14.875385 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:34:14.880674 systemd-resolved[286]: Defaulting to hostname 'linux'.
Dec 12 17:34:14.881652 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:34:14.884499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:34:14.933829 kernel: SCSI subsystem initialized
Dec 12 17:34:14.939809 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:34:14.946809 kernel: iscsi: registered transport (tcp)
Dec 12 17:34:14.959807 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:34:14.959828 kernel: QLogic iSCSI HBA Driver
Dec 12 17:34:14.976380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:34:15.009675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:15.011166 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:34:15.067211 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:34:15.069459 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:34:15.129881 kernel: raid6: neonx8 gen() 15634 MB/s
Dec 12 17:34:15.146026 kernel: raid6: neonx4 gen() 11572 MB/s
Dec 12 17:34:15.164013 kernel: raid6: neonx2 gen() 9347 MB/s
Dec 12 17:34:15.181841 kernel: raid6: neonx1 gen() 8166 MB/s
Dec 12 17:34:15.198851 kernel: raid6: int64x8 gen() 5554 MB/s
Dec 12 17:34:15.215817 kernel: raid6: int64x4 gen() 7284 MB/s
Dec 12 17:34:15.232837 kernel: raid6: int64x2 gen() 5972 MB/s
Dec 12 17:34:15.250768 kernel: raid6: int64x1 gen() 4703 MB/s
Dec 12 17:34:15.250849 kernel: raid6: using algorithm neonx8 gen() 15634 MB/s
Dec 12 17:34:15.266837 kernel: raid6: .... xor() 12050 MB/s, rmw enabled
Dec 12 17:34:15.266899 kernel: raid6: using neon recovery algorithm
Dec 12 17:34:15.272252 kernel: xor: measuring software checksum speed
Dec 12 17:34:15.272270 kernel: 8regs : 20950 MB/sec
Dec 12 17:34:15.272869 kernel: 32regs : 21664 MB/sec
Dec 12 17:34:15.273902 kernel: arm64_neon : 28099 MB/sec
Dec 12 17:34:15.273918 kernel: xor: using function: arm64_neon (28099 MB/sec)
Dec 12 17:34:15.327818 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:34:15.334265 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:34:15.338219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:15.367519 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Dec 12 17:34:15.371601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:15.373422 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:34:15.405119 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Dec 12 17:34:15.428862 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:34:15.430990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:34:15.496358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:15.498708 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:34:15.567979 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:34:15.569816 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 17:34:15.570591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:34:15.574338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:34:15.574357 kernel: GPT:9289727 != 19775487
Dec 12 17:34:15.574367 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:34:15.574376 kernel: GPT:9289727 != 19775487
Dec 12 17:34:15.574384 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:34:15.574393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:15.570714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:15.576422 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:15.578411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:15.603775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:34:15.610824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:15.617825 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:34:15.624513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:34:15.625620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 17:34:15.633921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:34:15.641115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:34:15.642140 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:34:15.643935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:15.645697 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:34:15.648365 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:34:15.650031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:34:15.663512 disk-uuid[591]: Primary Header is updated.
Dec 12 17:34:15.663512 disk-uuid[591]: Secondary Entries is updated.
Dec 12 17:34:15.663512 disk-uuid[591]: Secondary Header is updated.
Dec 12 17:34:15.668811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:15.670821 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:34:16.680819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:16.684002 disk-uuid[597]: The operation has completed successfully.
Dec 12 17:34:16.709959 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:34:16.710084 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:34:16.732269 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:34:16.757896 sh[611]: Success
Dec 12 17:34:16.770102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:34:16.770139 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:34:16.771134 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:34:16.777823 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:34:16.811500 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:34:16.813209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:34:16.821139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:34:16.830803 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623)
Dec 12 17:34:16.832842 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:34:16.832870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:16.838805 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:34:16.838831 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:34:16.839592 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:34:16.840815 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:34:16.842017 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:34:16.842771 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:34:16.845845 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:34:16.871583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Dec 12 17:34:16.871621 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:16.871632 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:16.876091 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:16.876129 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:16.879824 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:16.881432 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:34:16.883335 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:34:16.947905 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:34:16.954128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:34:16.988680 ignition[701]: Ignition 2.22.0
Dec 12 17:34:16.988696 ignition[701]: Stage: fetch-offline
Dec 12 17:34:16.988740 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:16.988750 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:16.988845 ignition[701]: parsed url from cmdline: ""
Dec 12 17:34:16.988848 ignition[701]: no config URL provided
Dec 12 17:34:16.988853 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:34:16.988860 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:34:16.988879 ignition[701]: op(1): [started] loading QEMU firmware config module
Dec 12 17:34:16.994509 systemd-networkd[801]: lo: Link UP
Dec 12 17:34:16.988883 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:34:16.994513 systemd-networkd[801]: lo: Gained carrier
Dec 12 17:34:16.994971 ignition[701]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:34:16.995255 systemd-networkd[801]: Enumeration completed
Dec 12 17:34:16.995631 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:16.995635 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:34:16.996195 systemd-networkd[801]: eth0: Link UP
Dec 12 17:34:16.996469 systemd-networkd[801]: eth0: Gained carrier
Dec 12 17:34:16.996478 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:16.997893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:34:16.999135 systemd[1]: Reached target network.target - Network.
Dec 12 17:34:17.021839 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:34:17.045999 ignition[701]: parsing config with SHA512: c8ec6a7c550b6a354a2ef3bf087a7b22c6d9dc3f459a2007251d8b33e9ddaa1df47d1aed9c458816a18cf93915099ac7ad706b30f1bb0bbcb41dc702c114988f
Dec 12 17:34:17.050214 unknown[701]: fetched base config from "system"
Dec 12 17:34:17.050225 unknown[701]: fetched user config from "qemu"
Dec 12 17:34:17.050586 ignition[701]: fetch-offline: fetch-offline passed
Dec 12 17:34:17.050637 ignition[701]: Ignition finished successfully
Dec 12 17:34:17.055138 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:34:17.057150 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:34:17.059041 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:34:17.091672 ignition[812]: Ignition 2.22.0
Dec 12 17:34:17.091688 ignition[812]: Stage: kargs
Dec 12 17:34:17.091841 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:17.091851 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:17.092568 ignition[812]: kargs: kargs passed
Dec 12 17:34:17.092608 ignition[812]: Ignition finished successfully
Dec 12 17:34:17.096240 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:34:17.098623 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:34:17.125199 ignition[819]: Ignition 2.22.0
Dec 12 17:34:17.125213 ignition[819]: Stage: disks
Dec 12 17:34:17.125338 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:17.125346 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:17.126085 ignition[819]: disks: disks passed
Dec 12 17:34:17.128438 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:34:17.126128 ignition[819]: Ignition finished successfully
Dec 12 17:34:17.129972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:34:17.131460 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:34:17.132988 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:34:17.134655 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:34:17.136515 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:34:17.138880 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:34:17.162204 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:34:17.166626 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:34:17.169387 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:34:17.235809 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:34:17.236244 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:34:17.237383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:34:17.239568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:34:17.241209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:34:17.242082 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:34:17.242123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:34:17.242146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:34:17.259624 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:34:17.262453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:34:17.265726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Dec 12 17:34:17.265748 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:17.265770 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:17.269123 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:17.269158 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:17.270885 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:34:17.298146 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:34:17.302069 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:34:17.306353 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:34:17.310299 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:34:17.378540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:34:17.382094 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:34:17.383569 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:34:17.401809 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:17.423106 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:34:17.438508 ignition[951]: INFO : Ignition 2.22.0
Dec 12 17:34:17.438508 ignition[951]: INFO : Stage: mount
Dec 12 17:34:17.440059 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:17.440059 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:17.440059 ignition[951]: INFO : mount: mount passed
Dec 12 17:34:17.440059 ignition[951]: INFO : Ignition finished successfully
Dec 12 17:34:17.441036 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:34:17.443513 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:34:17.830769 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:34:17.832203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:34:17.851417 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Dec 12 17:34:17.851459 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:17.851471 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:17.854926 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:17.854975 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:17.856387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:34:17.886366 ignition[981]: INFO : Ignition 2.22.0
Dec 12 17:34:17.886366 ignition[981]: INFO : Stage: files
Dec 12 17:34:17.888059 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:17.888059 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:17.888059 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:34:17.890948 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:34:17.890948 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:34:17.890948 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:34:17.890948 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:34:17.890948 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:34:17.890295 unknown[981]: wrote ssh authorized keys file for user: core
Dec 12 17:34:17.897706 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:34:17.897706 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:34:17.946118 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:34:18.127299 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:34:18.127299 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 17:34:18.130909 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 12 17:34:18.297582 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 12 17:34:18.372036 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:34:18.373642 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:34:18.386159 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 12 17:34:18.629262 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 12 17:34:18.787920 systemd-networkd[801]: eth0: Gained IPv6LL
Dec 12 17:34:18.837706 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:34:18.837706 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 12 17:34:18.841231 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:34:18.858669 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:34:18.861758 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:34:18.862990 ignition[981]: INFO : files: files passed
Dec 12 17:34:18.862990 ignition[981]: INFO : Ignition finished successfully
Dec 12 17:34:18.863975 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:34:18.866232 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:34:18.867877 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:34:18.884573 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:34:18.884669 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:34:18.887021 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 17:34:18.890668 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:18.890668 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:18.894006 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:18.895941 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:34:18.897187 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:34:18.901572 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:34:18.942035 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:34:18.942167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:34:18.944264 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:34:18.945604 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:34:18.947272 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:34:18.948127 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:34:18.961377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:34:18.963579 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 17:34:18.990051 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:34:18.991120 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:18.992908 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 17:34:18.994572 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 17:34:18.994695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:34:18.996833 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 17:34:18.998576 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 17:34:19.000003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 17:34:19.003796 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:34:19.005540 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 17:34:19.007355 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:34:19.008971 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 17:34:19.010491 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:34:19.012100 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 17:34:19.013731 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 17:34:19.015213 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 17:34:19.016473 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 17:34:19.016604 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:34:19.018440 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:19.019460 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:19.021080 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 17:34:19.024867 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:19.025934 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 17:34:19.026053 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:34:19.028427 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 17:34:19.028553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:34:19.030207 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 17:34:19.031495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 17:34:19.031601 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:19.033260 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 17:34:19.034560 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 17:34:19.036144 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 17:34:19.036226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:34:19.037901 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 17:34:19.037985 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:34:19.039434 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 17:34:19.039550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:34:19.040943 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 17:34:19.041042 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 17:34:19.043188 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 17:34:19.044260 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 17:34:19.044399 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:19.046857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 17:34:19.048411 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 17:34:19.048533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:19.050055 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 17:34:19.050150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:34:19.055066 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 17:34:19.060276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 17:34:19.067950 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 17:34:19.074664 ignition[1037]: INFO : Ignition 2.22.0
Dec 12 17:34:19.074664 ignition[1037]: INFO : Stage: umount
Dec 12 17:34:19.076907 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:19.076907 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:19.076907 ignition[1037]: INFO : umount: umount passed
Dec 12 17:34:19.076907 ignition[1037]: INFO : Ignition finished successfully
Dec 12 17:34:19.077839 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 17:34:19.077937 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 17:34:19.078967 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 17:34:19.079034 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 17:34:19.082472 systemd[1]: Stopped target network.target - Network.
Dec 12 17:34:19.083281 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 17:34:19.083368 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 17:34:19.085471 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 17:34:19.085522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 17:34:19.087641 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 17:34:19.087697 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 17:34:19.089861 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 17:34:19.089909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 17:34:19.091408 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 17:34:19.091459 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:34:19.093883 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 17:34:19.095258 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:19.103379 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 17:34:19.103505 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 17:34:19.106320 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 17:34:19.106570 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:34:19.106605 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:19.110720 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:34:19.111609 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 17:34:19.111727 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 17:34:19.115676 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 17:34:19.115840 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 17:34:19.116739 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 17:34:19.116778 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:19.119458 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 17:34:19.120256 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 17:34:19.120310 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:34:19.121994 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:34:19.122036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:19.126918 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 17:34:19.126968 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:19.128675 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:19.132259 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:34:19.146374 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 17:34:19.153944 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:19.155219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 17:34:19.155253 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:19.156781 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 17:34:19.156831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:19.158365 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 17:34:19.158410 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:34:19.160583 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 17:34:19.160623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:34:19.162835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 17:34:19.162882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:34:19.165907 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 17:34:19.166741 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 17:34:19.166812 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:19.169503 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 17:34:19.169544 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:19.172194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:34:19.172236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:19.175407 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 17:34:19.175881 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 17:34:19.180498 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 17:34:19.180589 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 17:34:19.182501 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 17:34:19.184556 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 17:34:19.202208 systemd[1]: Switching root.
Dec 12 17:34:19.244104 systemd-journald[246]: Journal stopped
Dec 12 17:34:20.012280 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Dec 12 17:34:20.012325 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 17:34:20.012337 kernel: SELinux: policy capability open_perms=1
Dec 12 17:34:20.012350 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 17:34:20.012359 kernel: SELinux: policy capability always_check_network=0
Dec 12 17:34:20.012368 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 17:34:20.012378 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 17:34:20.012394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 17:34:20.012405 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 17:34:20.012416 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 17:34:20.012428 kernel: audit: type=1403 audit(1765560859.441:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 17:34:20.012438 systemd[1]: Successfully loaded SELinux policy in 64.265ms.
Dec 12 17:34:20.012455 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.253ms.
Dec 12 17:34:20.012467 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:34:20.012477 systemd[1]: Detected virtualization kvm.
Dec 12 17:34:20.012487 systemd[1]: Detected architecture arm64.
Dec 12 17:34:20.012497 systemd[1]: Detected first boot.
Dec 12 17:34:20.012506 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:34:20.012522 zram_generator::config[1086]: No configuration found.
Dec 12 17:34:20.012533 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 17:34:20.012542 systemd[1]: Populated /etc with preset unit settings.
Dec 12 17:34:20.012552 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 17:34:20.012562 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 17:34:20.012572 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 17:34:20.012582 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:34:20.012592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 17:34:20.012604 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 17:34:20.012614 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 17:34:20.012624 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 17:34:20.012634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 17:34:20.012645 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 17:34:20.012655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 17:34:20.012665 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 17:34:20.012675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:20.012685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:20.012697 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 17:34:20.012717 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 17:34:20.012730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 17:34:20.012740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:34:20.012750 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 12 17:34:20.012761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:20.012771 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:20.012791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 17:34:20.012804 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 17:34:20.012819 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:34:20.012829 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:34:20.012839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:20.012849 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:34:20.012859 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:34:20.012870 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:34:20.012880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 17:34:20.012890 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 17:34:20.012902 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 17:34:20.012912 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:20.012922 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:20.012932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:20.012942 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 17:34:20.012952 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 17:34:20.012962 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 17:34:20.012972 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 17:34:20.012982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 17:34:20.012993 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 17:34:20.013003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 17:34:20.013014 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 17:34:20.013024 systemd[1]: Reached target machines.target - Containers.
Dec 12 17:34:20.013034 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 17:34:20.013044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:20.013054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:34:20.013065 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 17:34:20.013076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:20.013086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:34:20.013096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:20.013107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 17:34:20.013116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:20.013127 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 17:34:20.013137 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 17:34:20.013148 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 17:34:20.013157 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 17:34:20.013169 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 17:34:20.013179 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:20.013189 kernel: fuse: init (API version 7.41)
Dec 12 17:34:20.013198 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:34:20.013209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:34:20.013218 kernel: ACPI: bus type drm_connector registered
Dec 12 17:34:20.013228 kernel: loop: module loaded
Dec 12 17:34:20.013238 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:34:20.013248 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 17:34:20.013260 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 17:34:20.013270 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:34:20.013280 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 17:34:20.013290 systemd[1]: Stopped verity-setup.service.
Dec 12 17:34:20.013301 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 17:34:20.013312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 17:34:20.013322 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 17:34:20.013333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 17:34:20.013342 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 17:34:20.013372 systemd-journald[1154]: Collecting audit messages is disabled.
Dec 12 17:34:20.013392 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 17:34:20.013403 systemd-journald[1154]: Journal started
Dec 12 17:34:20.013424 systemd-journald[1154]: Runtime Journal (/run/log/journal/66fa825b725c4680b38a1e59363d0adf) is 6M, max 48.5M, 42.4M free.
Dec 12 17:34:19.795923 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 17:34:19.818816 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 17:34:19.819197 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 17:34:20.016521 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 17:34:20.019453 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:34:20.021810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:20.023381 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 17:34:20.023544 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 17:34:20.024887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:20.025058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:20.026425 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:34:20.026590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:34:20.029225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:20.029392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:20.030925 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 17:34:20.031109 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 17:34:20.032369 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:20.032543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:20.033848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:20.035299 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:20.036738 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 17:34:20.040227 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 17:34:20.054834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:20.062614 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:34:20.065157 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 17:34:20.067502 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 17:34:20.068935 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 17:34:20.068972 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:34:20.070816 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 17:34:20.076811 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 17:34:20.077899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:20.078922 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 17:34:20.080778 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 17:34:20.081970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:34:20.082903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 17:34:20.084077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:34:20.086323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:34:20.091335 systemd-journald[1154]: Time spent on flushing to /var/log/journal/66fa825b725c4680b38a1e59363d0adf is 15.296ms for 883 entries.
Dec 12 17:34:20.091335 systemd-journald[1154]: System Journal (/var/log/journal/66fa825b725c4680b38a1e59363d0adf) is 8M, max 195.6M, 187.6M free.
Dec 12 17:34:20.123955 systemd-journald[1154]: Received client request to flush runtime journal.
Dec 12 17:34:20.124018 kernel: loop0: detected capacity change from 0 to 211168
Dec 12 17:34:20.091197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 17:34:20.094917 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 17:34:20.098463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 17:34:20.100981 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 17:34:20.108094 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 17:34:20.114066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 17:34:20.117145 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 17:34:20.125155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:20.130099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 17:34:20.146426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 17:34:20.148236 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 17:34:20.150815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:34:20.156514 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 17:34:20.159238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:34:20.174816 kernel: loop1: detected capacity change from 0 to 119840
Dec 12 17:34:20.184849 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Dec 12 17:34:20.184863 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Dec 12 17:34:20.188549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:20.200804 kernel: loop2: detected capacity change from 0 to 100632
Dec 12 17:34:20.227819 kernel: loop3: detected capacity change from 0 to 211168
Dec 12 17:34:20.244830 kernel: loop4: detected capacity change from 0 to 119840
Dec 12 17:34:20.256824 kernel: loop5: detected capacity change from 0 to 100632
Dec 12 17:34:20.263732 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 17:34:20.264166 (sd-merge)[1224]: Merged extensions into '/usr'.
Dec 12 17:34:20.269121 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 17:34:20.269140 systemd[1]: Reloading...
Dec 12 17:34:20.326830 zram_generator::config[1251]: No configuration found.
Dec 12 17:34:20.429515 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 17:34:20.481557 systemd[1]: Reloading finished in 212 ms.
Dec 12 17:34:20.514833 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 17:34:20.516081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 17:34:20.528029 systemd[1]: Starting ensure-sysext.service...
Dec 12 17:34:20.529691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:34:20.539221 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Dec 12 17:34:20.539239 systemd[1]: Reloading...
Dec 12 17:34:20.546761 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 17:34:20.547095 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 17:34:20.547339 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 17:34:20.547518 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 17:34:20.548152 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 17:34:20.548345 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Dec 12 17:34:20.548386 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Dec 12 17:34:20.551344 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:34:20.551437 systemd-tmpfiles[1286]: Skipping /boot
Dec 12 17:34:20.557370 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:34:20.557473 systemd-tmpfiles[1286]: Skipping /boot
Dec 12 17:34:20.589862 zram_generator::config[1313]: No configuration found.
Dec 12 17:34:20.719539 systemd[1]: Reloading finished in 180 ms.
Dec 12 17:34:20.743822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 17:34:20.771048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:20.779377 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:34:20.782189 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 17:34:20.797904 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 17:34:20.801052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:20.807595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:20.813217 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 17:34:20.818246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:20.826078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:20.830139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:20.836194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:20.837302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:20.838899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:20.844440 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 17:34:20.858199 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 17:34:20.860589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:20.860834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:20.864548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:20.864742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:20.866608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:20.866812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:20.867233 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Dec 12 17:34:20.876503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:20.878163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:20.881935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:20.884519 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:20.885741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:20.885921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:20.888633 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 17:34:20.894036 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 17:34:20.895922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:20.900747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:20.900938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:20.902271 augenrules[1386]: No rules
Dec 12 17:34:20.902662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:20.902884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:20.904362 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:34:20.904843 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:34:20.906435 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 17:34:20.910923 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 17:34:20.920888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:20.921139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:20.926447 systemd[1]: Finished ensure-sysext.service.
Dec 12 17:34:20.927590 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 17:34:20.932901 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:34:20.933951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:20.935277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:20.937382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:34:20.939555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:20.941136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:20.941185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:20.942833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:34:20.944444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:34:20.958629 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 17:34:20.959741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 17:34:20.964447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:20.964720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:20.966168 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:20.966340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:20.969266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:34:20.980955 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:34:20.981142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:34:20.986949 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 12 17:34:20.991759 augenrules[1428]: /sbin/augenrules: No change
Dec 12 17:34:21.003984 augenrules[1458]: No rules
Dec 12 17:34:21.006090 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:34:21.006376 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:34:21.044234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:34:21.046641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 17:34:21.071823 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 17:34:21.127080 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 17:34:21.128712 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 17:34:21.134611 systemd-networkd[1433]: lo: Link UP Dec 12 17:34:21.134618 systemd-networkd[1433]: lo: Gained carrier Dec 12 17:34:21.135435 systemd-networkd[1433]: Enumeration completed Dec 12 17:34:21.135526 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:34:21.135858 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:34:21.135867 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:34:21.136433 systemd-networkd[1433]: eth0: Link UP Dec 12 17:34:21.136542 systemd-networkd[1433]: eth0: Gained carrier Dec 12 17:34:21.136560 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:34:21.138942 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:34:21.142273 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:34:21.160273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:34:21.160842 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:34:21.162834 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Dec 12 17:34:21.163305 systemd-resolved[1352]: Positive Trust Anchors: Dec 12 17:34:21.163317 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:34:21.163349 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:34:21.167752 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:34:21.169027 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:34:21.169101 systemd-timesyncd[1434]: Initial clock synchronization to Fri 2025-12-12 17:34:21.377571 UTC. Dec 12 17:34:21.174743 systemd-resolved[1352]: Defaulting to hostname 'linux'. Dec 12 17:34:21.176217 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:34:21.179114 systemd[1]: Reached target network.target - Network. Dec 12 17:34:21.179873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:34:21.217072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:34:21.218282 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:34:21.219295 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:34:21.220382 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:34:21.221641 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Dec 12 17:34:21.222685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:34:21.223800 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:34:21.224773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:34:21.224821 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:34:21.225533 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:34:21.227160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:34:21.229649 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:34:21.232552 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:34:21.233850 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:34:21.234864 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:34:21.237843 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:34:21.239026 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:34:21.240687 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:34:21.241768 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:34:21.242555 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:34:21.243413 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:34:21.243445 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:34:21.244486 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:34:21.246455 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Dec 12 17:34:21.248388 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:34:21.250396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:34:21.252291 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:34:21.253170 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:34:21.254177 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:34:21.257909 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:34:21.259423 jq[1503]: false Dec 12 17:34:21.259765 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:34:21.263164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:34:21.266994 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:34:21.267254 extend-filesystems[1504]: Found /dev/vda6 Dec 12 17:34:21.268750 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:34:21.269181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:34:21.269712 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:34:21.273865 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:34:21.273994 extend-filesystems[1504]: Found /dev/vda9 Dec 12 17:34:21.277602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:34:21.279035 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 12 17:34:21.279134 extend-filesystems[1504]: Checking size of /dev/vda9 Dec 12 17:34:21.283986 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:34:21.284398 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:34:21.284596 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:34:21.286603 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:34:21.287278 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:34:21.288157 jq[1521]: true Dec 12 17:34:21.304625 update_engine[1516]: I20251212 17:34:21.304421 1516 main.cc:92] Flatcar Update Engine starting Dec 12 17:34:21.307820 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:34:21.310581 jq[1529]: true Dec 12 17:34:21.317602 tar[1528]: linux-arm64/LICENSE Dec 12 17:34:21.317602 tar[1528]: linux-arm64/helm Dec 12 17:34:21.317981 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:34:21.319262 systemd-logind[1514]: New seat seat0. Dec 12 17:34:21.320227 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:34:21.330283 extend-filesystems[1504]: Resized partition /dev/vda9 Dec 12 17:34:21.333397 extend-filesystems[1551]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:34:21.334171 dbus-daemon[1501]: [system] SELinux support is enabled Dec 12 17:34:21.334350 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:34:21.338107 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:34:21.338136 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 12 17:34:21.338831 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 17:34:21.340178 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:34:21.340193 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:34:21.340957 update_engine[1516]: I20251212 17:34:21.340741 1516 update_check_scheduler.cc:74] Next update check in 7m46s Dec 12 17:34:21.342110 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:34:21.345907 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:34:21.365867 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:34:21.434809 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:34:21.436558 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:34:21.450767 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:34:21.450767 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:34:21.450767 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:34:21.454488 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Dec 12 17:34:21.453476 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:34:21.455361 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:34:21.453692 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:34:21.457407 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:34:21.460419 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 12 17:34:21.524488 containerd[1530]: time="2025-12-12T17:34:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:34:21.525482 containerd[1530]: time="2025-12-12T17:34:21.525440120Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:34:21.534776 containerd[1530]: time="2025-12-12T17:34:21.534728560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.12µs" Dec 12 17:34:21.534776 containerd[1530]: time="2025-12-12T17:34:21.534769200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:34:21.534889 containerd[1530]: time="2025-12-12T17:34:21.534795440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.534974320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.534996160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535021200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535071800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535082720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 
17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535312560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535327240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535339360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535346840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535410640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:34:21.536814 containerd[1530]: time="2025-12-12T17:34:21.535596520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:34:21.537045 containerd[1530]: time="2025-12-12T17:34:21.535622760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:34:21.537045 containerd[1530]: time="2025-12-12T17:34:21.535631960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:34:21.537045 containerd[1530]: time="2025-12-12T17:34:21.535679040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:34:21.537045 
containerd[1530]: time="2025-12-12T17:34:21.536033520Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:34:21.537045 containerd[1530]: time="2025-12-12T17:34:21.536104320Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.541901880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.541985360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.542000800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.542013000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.542026120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:34:21.542070 containerd[1530]: time="2025-12-12T17:34:21.542042560Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:34:21.542264 containerd[1530]: time="2025-12-12T17:34:21.542248720Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:34:21.542323 containerd[1530]: time="2025-12-12T17:34:21.542310600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:34:21.542377 containerd[1530]: time="2025-12-12T17:34:21.542365600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:34:21.542424 containerd[1530]: 
time="2025-12-12T17:34:21.542413280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:34:21.542470 containerd[1530]: time="2025-12-12T17:34:21.542458840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:34:21.542520 containerd[1530]: time="2025-12-12T17:34:21.542508160Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:34:21.542728 containerd[1530]: time="2025-12-12T17:34:21.542691680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:34:21.542824 containerd[1530]: time="2025-12-12T17:34:21.542805720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:34:21.542902 containerd[1530]: time="2025-12-12T17:34:21.542887200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:34:21.542960 containerd[1530]: time="2025-12-12T17:34:21.542948760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:34:21.543009 containerd[1530]: time="2025-12-12T17:34:21.542996760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:34:21.543053 containerd[1530]: time="2025-12-12T17:34:21.543042680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:34:21.543111 containerd[1530]: time="2025-12-12T17:34:21.543098680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:34:21.543167 containerd[1530]: time="2025-12-12T17:34:21.543155920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:34:21.543213 containerd[1530]: time="2025-12-12T17:34:21.543203160Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:34:21.543260 containerd[1530]: time="2025-12-12T17:34:21.543248920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:34:21.543307 containerd[1530]: time="2025-12-12T17:34:21.543295720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:34:21.543522 containerd[1530]: time="2025-12-12T17:34:21.543506760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:34:21.543581 containerd[1530]: time="2025-12-12T17:34:21.543570120Z" level=info msg="Start snapshots syncer" Dec 12 17:34:21.543654 containerd[1530]: time="2025-12-12T17:34:21.543641520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:34:21.544134 containerd[1530]: time="2025-12-12T17:34:21.544092280Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:34:21.544290 containerd[1530]: time="2025-12-12T17:34:21.544273520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:34:21.544458 containerd[1530]: time="2025-12-12T17:34:21.544394880Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:34:21.544733 containerd[1530]: time="2025-12-12T17:34:21.544707720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:34:21.544830 containerd[1530]: time="2025-12-12T17:34:21.544816520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:34:21.544882 containerd[1530]: time="2025-12-12T17:34:21.544871080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:34:21.544931 containerd[1530]: time="2025-12-12T17:34:21.544919160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:34:21.544980 containerd[1530]: time="2025-12-12T17:34:21.544968720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:34:21.545028 containerd[1530]: time="2025-12-12T17:34:21.545016520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:34:21.545076 containerd[1530]: time="2025-12-12T17:34:21.545065160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:34:21.545166 containerd[1530]: time="2025-12-12T17:34:21.545152480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:34:21.545224 containerd[1530]: time="2025-12-12T17:34:21.545212080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:34:21.545275 containerd[1530]: time="2025-12-12T17:34:21.545262760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:34:21.545398 containerd[1530]: time="2025-12-12T17:34:21.545336080Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:34:21.545398 containerd[1530]: time="2025-12-12T17:34:21.545358880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:34:21.545398 containerd[1530]: time="2025-12-12T17:34:21.545369520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:34:21.545536 containerd[1530]: time="2025-12-12T17:34:21.545379200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:34:21.545587 containerd[1530]: time="2025-12-12T17:34:21.545574920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:34:21.545641 containerd[1530]: time="2025-12-12T17:34:21.545628800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:34:21.545690 containerd[1530]: time="2025-12-12T17:34:21.545678520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:34:21.545835 containerd[1530]: time="2025-12-12T17:34:21.545821120Z" level=info msg="runtime interface created" Dec 12 17:34:21.545878 containerd[1530]: time="2025-12-12T17:34:21.545868400Z" level=info msg="created NRI interface" Dec 12 17:34:21.545926 containerd[1530]: time="2025-12-12T17:34:21.545914080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:34:21.546053 containerd[1530]: time="2025-12-12T17:34:21.545970560Z" level=info msg="Connect containerd service" Dec 12 17:34:21.546053 containerd[1530]: time="2025-12-12T17:34:21.546001480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:34:21.548477 
containerd[1530]: time="2025-12-12T17:34:21.548446960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625294160Z" level=info msg="Start subscribing containerd event" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625358560Z" level=info msg="Start recovering state" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625453040Z" level=info msg="Start event monitor" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625466760Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625473520Z" level=info msg="Start streaming server" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625482120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625489560Z" level=info msg="runtime interface starting up..." Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625495280Z" level=info msg="starting plugins..." Dec 12 17:34:21.625858 containerd[1530]: time="2025-12-12T17:34:21.625508440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:34:21.626313 containerd[1530]: time="2025-12-12T17:34:21.626293040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:34:21.626494 containerd[1530]: time="2025-12-12T17:34:21.626440680Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:34:21.626747 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 12 17:34:21.629559 containerd[1530]: time="2025-12-12T17:34:21.629525560Z" level=info msg="containerd successfully booted in 0.105346s" Dec 12 17:34:21.683365 tar[1528]: linux-arm64/README.md Dec 12 17:34:21.703529 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:34:22.243981 systemd-networkd[1433]: eth0: Gained IPv6LL Dec 12 17:34:22.249865 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:34:22.253238 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:34:22.256418 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:34:22.258787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:22.264365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:34:22.288893 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:34:22.293197 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:34:22.293400 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:34:22.295248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:34:22.777100 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:34:22.799849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:34:22.802920 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:34:22.823722 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:34:22.824015 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:34:22.826854 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:34:22.855084 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:34:22.857879 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 12 17:34:22.860018 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:34:22.861507 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:34:22.890753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:22.892364 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:34:22.893603 systemd[1]: Startup finished in 2.070s (kernel) + 4.822s (initrd) + 3.516s (userspace) = 10.408s. Dec 12 17:34:22.895143 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:34:23.281748 kubelet[1633]: E1212 17:34:23.281665 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:34:23.284279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:34:23.284419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:34:23.284749 systemd[1]: kubelet.service: Consumed 762ms CPU time, 256.8M memory peak. Dec 12 17:34:27.687394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:34:27.688480 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:34628.service - OpenSSH per-connection server daemon (10.0.0.1:34628). Dec 12 17:34:27.768228 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 34628 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:34:27.772589 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:34:27.782533 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 12 17:34:27.783502 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:34:27.791013 systemd-logind[1514]: New session 1 of user core. Dec 12 17:34:27.818147 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:34:27.823331 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:34:27.836999 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:34:27.839165 systemd-logind[1514]: New session c1 of user core. Dec 12 17:34:27.953096 systemd[1651]: Queued start job for default target default.target. Dec 12 17:34:27.968894 systemd[1651]: Created slice app.slice - User Application Slice. Dec 12 17:34:27.968934 systemd[1651]: Reached target paths.target - Paths. Dec 12 17:34:27.968997 systemd[1651]: Reached target timers.target - Timers. Dec 12 17:34:27.970320 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:34:27.980324 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:34:27.980390 systemd[1651]: Reached target sockets.target - Sockets. Dec 12 17:34:27.980428 systemd[1651]: Reached target basic.target - Basic System. Dec 12 17:34:27.980455 systemd[1651]: Reached target default.target - Main User Target. Dec 12 17:34:27.980481 systemd[1651]: Startup finished in 135ms. Dec 12 17:34:27.980692 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:34:27.982278 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:34:28.048107 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:34636.service - OpenSSH per-connection server daemon (10.0.0.1:34636). 
Dec 12 17:34:28.104300 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34636 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:28.105710 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:28.109864 systemd-logind[1514]: New session 2 of user core.
Dec 12 17:34:28.118005 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 17:34:28.170346 sshd[1665]: Connection closed by 10.0.0.1 port 34636
Dec 12 17:34:28.170201 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:28.183768 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:34636.service: Deactivated successfully.
Dec 12 17:34:28.185191 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 17:34:28.187339 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit.
Dec 12 17:34:28.189437 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:34652.service - OpenSSH per-connection server daemon (10.0.0.1:34652).
Dec 12 17:34:28.190279 systemd-logind[1514]: Removed session 2.
Dec 12 17:34:28.248608 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 34652 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:28.250322 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:28.253993 systemd-logind[1514]: New session 3 of user core.
Dec 12 17:34:28.261977 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 17:34:28.310857 sshd[1674]: Connection closed by 10.0.0.1 port 34652
Dec 12 17:34:28.311331 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:28.328901 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:34652.service: Deactivated successfully.
Dec 12 17:34:28.331527 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 17:34:28.332594 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit.
Dec 12 17:34:28.336078 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:34666.service - OpenSSH per-connection server daemon (10.0.0.1:34666).
Dec 12 17:34:28.336728 systemd-logind[1514]: Removed session 3.
Dec 12 17:34:28.397762 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 34666 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:28.399153 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:28.403886 systemd-logind[1514]: New session 4 of user core.
Dec 12 17:34:28.425035 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 17:34:28.477432 sshd[1684]: Connection closed by 10.0.0.1 port 34666
Dec 12 17:34:28.477914 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:28.493938 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:34666.service: Deactivated successfully.
Dec 12 17:34:28.496266 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 17:34:28.497992 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit.
Dec 12 17:34:28.499667 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:34680.service - OpenSSH per-connection server daemon (10.0.0.1:34680).
Dec 12 17:34:28.500993 systemd-logind[1514]: Removed session 4.
Dec 12 17:34:28.562327 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 34680 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:28.563669 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:28.567508 systemd-logind[1514]: New session 5 of user core.
Dec 12 17:34:28.579988 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 17:34:28.637262 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 17:34:28.637530 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:34:28.649996 sudo[1694]: pam_unix(sudo:session): session closed for user root
Dec 12 17:34:28.651823 sshd[1693]: Connection closed by 10.0.0.1 port 34680
Dec 12 17:34:28.652059 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:28.660881 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:34680.service: Deactivated successfully.
Dec 12 17:34:28.662343 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 17:34:28.664082 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit.
Dec 12 17:34:28.665076 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:34696.service - OpenSSH per-connection server daemon (10.0.0.1:34696).
Dec 12 17:34:28.665785 systemd-logind[1514]: Removed session 5.
Dec 12 17:34:28.733123 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 34696 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:28.734437 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:28.738258 systemd-logind[1514]: New session 6 of user core.
Dec 12 17:34:28.754957 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 17:34:28.806329 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 17:34:28.806595 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:34:28.883733 sudo[1705]: pam_unix(sudo:session): session closed for user root
Dec 12 17:34:28.888699 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 17:34:28.889006 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:34:28.900715 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:34:28.944832 augenrules[1727]: No rules
Dec 12 17:34:28.946216 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:34:28.946465 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:34:28.948034 sudo[1704]: pam_unix(sudo:session): session closed for user root
Dec 12 17:34:28.949728 sshd[1703]: Connection closed by 10.0.0.1 port 34696
Dec 12 17:34:28.949592 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Dec 12 17:34:28.962858 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:34696.service: Deactivated successfully.
Dec 12 17:34:28.964200 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 17:34:28.966894 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit.
Dec 12 17:34:28.969064 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:34708.service - OpenSSH per-connection server daemon (10.0.0.1:34708).
Dec 12 17:34:28.972256 systemd-logind[1514]: Removed session 6.
Dec 12 17:34:29.020473 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 34708 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:34:29.021632 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:34:29.025837 systemd-logind[1514]: New session 7 of user core.
Dec 12 17:34:29.038997 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 17:34:29.089542 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 17:34:29.090125 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 17:34:29.392267 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 17:34:29.409137 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 17:34:29.608155 dockerd[1760]: time="2025-12-12T17:34:29.608081997Z" level=info msg="Starting up"
Dec 12 17:34:29.609036 dockerd[1760]: time="2025-12-12T17:34:29.609015056Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 17:34:29.619487 dockerd[1760]: time="2025-12-12T17:34:29.619430002Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 17:34:29.656113 dockerd[1760]: time="2025-12-12T17:34:29.655977476Z" level=info msg="Loading containers: start."
Dec 12 17:34:29.665217 kernel: Initializing XFRM netlink socket
Dec 12 17:34:29.868643 systemd-networkd[1433]: docker0: Link UP
Dec 12 17:34:29.873677 dockerd[1760]: time="2025-12-12T17:34:29.873619020Z" level=info msg="Loading containers: done."
Dec 12 17:34:29.886984 dockerd[1760]: time="2025-12-12T17:34:29.886923176Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 17:34:29.887147 dockerd[1760]: time="2025-12-12T17:34:29.887021170Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 17:34:29.887147 dockerd[1760]: time="2025-12-12T17:34:29.887109061Z" level=info msg="Initializing buildkit"
Dec 12 17:34:29.911566 dockerd[1760]: time="2025-12-12T17:34:29.911454633Z" level=info msg="Completed buildkit initialization"
Dec 12 17:34:29.916581 dockerd[1760]: time="2025-12-12T17:34:29.916535988Z" level=info msg="Daemon has completed initialization"
Dec 12 17:34:29.916845 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 17:34:29.917134 dockerd[1760]: time="2025-12-12T17:34:29.916733389Z" level=info msg="API listen on /run/docker.sock"
Dec 12 17:34:30.470355 containerd[1530]: time="2025-12-12T17:34:30.470308482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 12 17:34:31.033327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440394800.mount: Deactivated successfully.
Dec 12 17:34:31.961672 containerd[1530]: time="2025-12-12T17:34:31.960985612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:31.962293 containerd[1530]: time="2025-12-12T17:34:31.962249364Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283"
Dec 12 17:34:31.967255 containerd[1530]: time="2025-12-12T17:34:31.967208065Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:31.971395 containerd[1530]: time="2025-12-12T17:34:31.971360572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:31.972150 containerd[1530]: time="2025-12-12T17:34:31.971984507Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.501627607s"
Dec 12 17:34:31.972150 containerd[1530]: time="2025-12-12T17:34:31.972027723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 12 17:34:31.973195 containerd[1530]: time="2025-12-12T17:34:31.973172429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 12 17:34:32.994686 containerd[1530]: time="2025-12-12T17:34:32.994634676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:32.995726 containerd[1530]: time="2025-12-12T17:34:32.995677503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083"
Dec 12 17:34:32.996392 containerd[1530]: time="2025-12-12T17:34:32.996351213Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:32.999371 containerd[1530]: time="2025-12-12T17:34:32.999316983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:33.000267 containerd[1530]: time="2025-12-12T17:34:33.000153000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.026886407s"
Dec 12 17:34:33.000267 containerd[1530]: time="2025-12-12T17:34:33.000183085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 12 17:34:33.000782 containerd[1530]: time="2025-12-12T17:34:33.000757520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 12 17:34:33.505178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:34:33.506921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:33.639589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:33.656183 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:34:33.696364 kubelet[2046]: E1212 17:34:33.696294 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:34:33.701840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:34:33.701975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:34:33.702498 systemd[1]: kubelet.service: Consumed 154ms CPU time, 105.9M memory peak.
Dec 12 17:34:34.168829 containerd[1530]: time="2025-12-12T17:34:34.168766990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:34.170065 containerd[1530]: time="2025-12-12T17:34:34.169418474Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069"
Dec 12 17:34:34.170372 containerd[1530]: time="2025-12-12T17:34:34.170347006Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:34.173055 containerd[1530]: time="2025-12-12T17:34:34.173024799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:34.174132 containerd[1530]: time="2025-12-12T17:34:34.173920761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.17313172s"
Dec 12 17:34:34.174132 containerd[1530]: time="2025-12-12T17:34:34.173952165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 12 17:34:34.174482 containerd[1530]: time="2025-12-12T17:34:34.174410319Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 12 17:34:35.144267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975445550.mount: Deactivated successfully.
Dec 12 17:34:35.417151 containerd[1530]: time="2025-12-12T17:34:35.417020963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:35.417778 containerd[1530]: time="2025-12-12T17:34:35.417697418Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675"
Dec 12 17:34:35.418667 containerd[1530]: time="2025-12-12T17:34:35.418612806Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:35.420798 containerd[1530]: time="2025-12-12T17:34:35.420727401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:35.421416 containerd[1530]: time="2025-12-12T17:34:35.421385010Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.246939595s"
Dec 12 17:34:35.421464 containerd[1530]: time="2025-12-12T17:34:35.421422622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 12 17:34:35.422121 containerd[1530]: time="2025-12-12T17:34:35.421878990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 12 17:34:36.060657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431211825.mount: Deactivated successfully.
Dec 12 17:34:37.040546 containerd[1530]: time="2025-12-12T17:34:37.040069520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:37.043460 containerd[1530]: time="2025-12-12T17:34:37.043415855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Dec 12 17:34:37.054997 containerd[1530]: time="2025-12-12T17:34:37.054946892Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:37.068876 containerd[1530]: time="2025-12-12T17:34:37.068765832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:37.069987 containerd[1530]: time="2025-12-12T17:34:37.069932443Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.648021396s"
Dec 12 17:34:37.069987 containerd[1530]: time="2025-12-12T17:34:37.069969011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 12 17:34:37.070686 containerd[1530]: time="2025-12-12T17:34:37.070596010Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 17:34:37.600074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231896092.mount: Deactivated successfully.
Dec 12 17:34:37.604040 containerd[1530]: time="2025-12-12T17:34:37.604006616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:34:37.604504 containerd[1530]: time="2025-12-12T17:34:37.604481481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 12 17:34:37.605435 containerd[1530]: time="2025-12-12T17:34:37.605389064Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:34:37.608828 containerd[1530]: time="2025-12-12T17:34:37.607829903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:34:37.608828 containerd[1530]: time="2025-12-12T17:34:37.608486968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.862137ms"
Dec 12 17:34:37.608828 containerd[1530]: time="2025-12-12T17:34:37.608508363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 12 17:34:37.609261 containerd[1530]: time="2025-12-12T17:34:37.609204885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 12 17:34:38.098072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805803570.mount: Deactivated successfully.
Dec 12 17:34:39.506061 containerd[1530]: time="2025-12-12T17:34:39.506012644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:39.507269 containerd[1530]: time="2025-12-12T17:34:39.506986220Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653"
Dec 12 17:34:39.509117 containerd[1530]: time="2025-12-12T17:34:39.509072346Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:39.511833 containerd[1530]: time="2025-12-12T17:34:39.511782990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:34:39.513146 containerd[1530]: time="2025-12-12T17:34:39.513100089Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.903848046s"
Dec 12 17:34:39.513198 containerd[1530]: time="2025-12-12T17:34:39.513152510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 12 17:34:43.755388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 12 17:34:43.757020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:43.908158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:43.912596 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:34:43.948127 kubelet[2209]: E1212 17:34:43.948041 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:34:43.950833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:34:43.951060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:34:43.951429 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.8M memory peak.
Dec 12 17:34:45.397671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:45.397840 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.8M memory peak.
Dec 12 17:34:45.400018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:45.423439 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)...
Dec 12 17:34:45.423457 systemd[1]: Reloading...
Dec 12 17:34:45.504830 zram_generator::config[2268]: No configuration found.
Dec 12 17:34:45.714918 systemd[1]: Reloading finished in 291 ms.
Dec 12 17:34:45.775410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 17:34:45.775503 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 17:34:45.775773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:45.775848 systemd[1]: kubelet.service: Consumed 95ms CPU time, 94.9M memory peak.
Dec 12 17:34:45.777462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:34:45.910603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:34:45.927121 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:34:45.961500 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:34:45.961500 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:34:45.961500 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:34:45.961849 kubelet[2312]: I1212 17:34:45.961537 2312 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:34:46.935209 kubelet[2312]: I1212 17:34:46.935161 2312 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 12 17:34:46.935209 kubelet[2312]: I1212 17:34:46.935193 2312 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:34:46.935556 kubelet[2312]: I1212 17:34:46.935542 2312 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:34:46.958507 kubelet[2312]: E1212 17:34:46.958456 2312 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 17:34:46.959527 kubelet[2312]: I1212 17:34:46.959495 2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:34:46.972017 kubelet[2312]: I1212 17:34:46.971971 2312 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:34:46.975105 kubelet[2312]: I1212 17:34:46.975081 2312 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 17:34:46.982213 kubelet[2312]: I1212 17:34:46.982155 2312 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:34:46.982375 kubelet[2312]: I1212 17:34:46.982210 2312 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:34:46.982495 kubelet[2312]: I1212 17:34:46.982441 2312 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:34:46.982495 kubelet[2312]: I1212 17:34:46.982450 2312 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 17:34:46.982666 kubelet[2312]: I1212 17:34:46.982650 2312 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:34:46.985332 kubelet[2312]: I1212 17:34:46.985309 2312 kubelet.go:480] "Attempting to sync node with API server"
Dec 12 17:34:46.985374 kubelet[2312]: I1212 17:34:46.985337 2312 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:34:46.985416 kubelet[2312]: I1212 17:34:46.985393 2312 kubelet.go:386] "Adding apiserver pod source"
Dec 12 17:34:46.985416 kubelet[2312]: I1212 17:34:46.985405 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:34:46.987797 kubelet[2312]: I1212 17:34:46.987412 2312 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:34:46.988395 kubelet[2312]: E1212 17:34:46.988356 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 17:34:46.988561 kubelet[2312]: E1212 17:34:46.988538 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:34:46.989071 kubelet[2312]: I1212 17:34:46.989034 2312 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:34:46.989196 kubelet[2312]: W1212 17:34:46.989173 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 17:34:46.993936 kubelet[2312]: I1212 17:34:46.993911 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 17:34:46.994012 kubelet[2312]: I1212 17:34:46.993961 2312 server.go:1289] "Started kubelet"
Dec 12 17:34:46.994111 kubelet[2312]: I1212 17:34:46.994079 2312 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:34:46.996096 kubelet[2312]: I1212 17:34:46.996027 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:34:46.996412 kubelet[2312]: I1212 17:34:46.996386 2312 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:34:46.997203 kubelet[2312]: I1212 17:34:46.997178 2312 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 17:34:46.998273 kubelet[2312]: I1212 17:34:46.998243 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:34:47.000681 kubelet[2312]: I1212 17:34:47.000655 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:34:47.004228 kubelet[2312]: I1212 17:34:47.003425 2312 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 17:34:47.004228 kubelet[2312]: I1212 17:34:47.003534 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 17:34:47.004228 kubelet[2312]: I1212 17:34:47.003571 2312 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 17:34:47.004228 kubelet[2312]: E1212 17:34:47.003920 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 17:34:47.004228 kubelet[2312]: E1212 17:34:47.004009 2312 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:34:47.004228 kubelet[2312]: E1212 17:34:46.999254 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808845ba98c9bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:34:46.993930687 +0000 UTC m=+1.063721895,LastTimestamp:2025-12-12 17:34:46.993930687 +0000 UTC m=+1.063721895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 12 17:34:47.004568 kubelet[2312]: E1212 17:34:47.004540 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:34:47.004765 kubelet[2312]: I1212 17:34:47.004746 2312 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:34:47.005018 kubelet[2312]: I1212 17:34:47.004999 2312 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:34:47.005709 kubelet[2312]: E1212 17:34:47.005678 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Dec 12 17:34:47.005984 kubelet[2312]: I1212 17:34:47.005965 2312 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:34:47.008805 kubelet[2312]: I1212 17:34:47.007207 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:34:47.016737 kubelet[2312]: I1212 17:34:47.016514 2312 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:34:47.016737 kubelet[2312]: I1212 17:34:47.016529 2312 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:34:47.016737 kubelet[2312]: I1212 17:34:47.016546 2312 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:34:47.021860 kubelet[2312]: I1212 17:34:47.021831 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:34:47.021938 kubelet[2312]: I1212 17:34:47.021866 2312 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:34:47.021938 kubelet[2312]: I1212 17:34:47.021890 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:34:47.021938 kubelet[2312]: I1212 17:34:47.021900 2312 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:34:47.021991 kubelet[2312]: E1212 17:34:47.021939 2312 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:34:47.104978 kubelet[2312]: E1212 17:34:47.104930 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:34:47.106725 kubelet[2312]: I1212 17:34:47.106451 2312 policy_none.go:49] "None policy: Start" Dec 12 17:34:47.106725 kubelet[2312]: I1212 17:34:47.106478 2312 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:34:47.106725 kubelet[2312]: I1212 17:34:47.106490 2312 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:34:47.106866 kubelet[2312]: E1212 17:34:47.106836 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:34:47.112490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:34:47.122133 kubelet[2312]: E1212 17:34:47.122112 2312 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:34:47.123816 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:34:47.144073 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 12 17:34:47.145817 kubelet[2312]: E1212 17:34:47.145673 2312 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:34:47.145946 kubelet[2312]: I1212 17:34:47.145908 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:34:47.145983 kubelet[2312]: I1212 17:34:47.145924 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:34:47.146332 kubelet[2312]: I1212 17:34:47.146152 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:34:47.147029 kubelet[2312]: E1212 17:34:47.146860 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:34:47.147029 kubelet[2312]: E1212 17:34:47.146900 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:34:47.206537 kubelet[2312]: E1212 17:34:47.206428 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Dec 12 17:34:47.247824 kubelet[2312]: I1212 17:34:47.247644 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:47.248136 kubelet[2312]: E1212 17:34:47.248101 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:47.331821 systemd[1]: Created slice kubepods-burstable-podc00eea8c23b1e8846ac725d648481674.slice - libcontainer container kubepods-burstable-podc00eea8c23b1e8846ac725d648481674.slice. 
Dec 12 17:34:47.359367 kubelet[2312]: E1212 17:34:47.359336 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:47.363081 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 12 17:34:47.382954 kubelet[2312]: E1212 17:34:47.382921 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:47.387991 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 12 17:34:47.390562 kubelet[2312]: E1212 17:34:47.390536 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:47.406244 kubelet[2312]: I1212 17:34:47.406168 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:47.406244 kubelet[2312]: I1212 17:34:47.406206 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:47.406244 kubelet[2312]: I1212 17:34:47.406227 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:47.406244 kubelet[2312]: I1212 17:34:47.406245 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:47.406244 kubelet[2312]: I1212 17:34:47.406263 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:47.406528 kubelet[2312]: I1212 17:34:47.406278 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:47.406528 kubelet[2312]: I1212 17:34:47.406292 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:47.406528 kubelet[2312]: I1212 17:34:47.406305 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:47.406528 kubelet[2312]: I1212 17:34:47.406330 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:47.449707 kubelet[2312]: I1212 17:34:47.449628 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:47.450102 kubelet[2312]: E1212 17:34:47.450071 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:47.607200 kubelet[2312]: E1212 17:34:47.607159 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Dec 12 17:34:47.661064 containerd[1530]: time="2025-12-12T17:34:47.661022597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c00eea8c23b1e8846ac725d648481674,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:47.691554 containerd[1530]: time="2025-12-12T17:34:47.691490578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:47.691734 containerd[1530]: time="2025-12-12T17:34:47.691640997Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:47.720043 containerd[1530]: time="2025-12-12T17:34:47.719460649Z" level=info msg="connecting to shim e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e" address="unix:///run/containerd/s/eb3fe461b76096b1addcacf7bb85d4f034782e4e82d7d4e50812c342a982ac40" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:47.742990 containerd[1530]: time="2025-12-12T17:34:47.742860292Z" level=info msg="connecting to shim 6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f" address="unix:///run/containerd/s/0ed722408fff9be77ee78e6da1fc4ffb31f58fd79395d05db76feb43f9a80679" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:47.753581 containerd[1530]: time="2025-12-12T17:34:47.750282557Z" level=info msg="connecting to shim a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f" address="unix:///run/containerd/s/6e17cffdea4da81ca1def0a15021ccdc5cdbdb3ccbe3e86fa449348e12a7bba8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:47.762032 systemd[1]: Started cri-containerd-e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e.scope - libcontainer container e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e. Dec 12 17:34:47.767701 systemd[1]: Started cri-containerd-6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f.scope - libcontainer container 6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f. Dec 12 17:34:47.787048 systemd[1]: Started cri-containerd-a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f.scope - libcontainer container a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f. 
Dec 12 17:34:47.825102 containerd[1530]: time="2025-12-12T17:34:47.825046029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c00eea8c23b1e8846ac725d648481674,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e\"" Dec 12 17:34:47.833890 containerd[1530]: time="2025-12-12T17:34:47.833767336Z" level=info msg="CreateContainer within sandbox \"e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:34:47.838237 containerd[1530]: time="2025-12-12T17:34:47.838162682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f\"" Dec 12 17:34:47.847442 containerd[1530]: time="2025-12-12T17:34:47.847399986Z" level=info msg="CreateContainer within sandbox \"6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:34:47.847756 containerd[1530]: time="2025-12-12T17:34:47.847725166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f\"" Dec 12 17:34:47.852004 containerd[1530]: time="2025-12-12T17:34:47.851071702Z" level=info msg="Container aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:47.852113 kubelet[2312]: I1212 17:34:47.851214 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:47.852113 kubelet[2312]: E1212 17:34:47.851682 2312 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Dec 12 17:34:47.854470 containerd[1530]: time="2025-12-12T17:34:47.854435773Z" level=info msg="CreateContainer within sandbox \"a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:34:47.878210 containerd[1530]: time="2025-12-12T17:34:47.878099061Z" level=info msg="CreateContainer within sandbox \"e1e5fe6142a857afcee6ff597def16a4eb044390cce226d06551479878f18c3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55\"" Dec 12 17:34:47.879059 containerd[1530]: time="2025-12-12T17:34:47.879027840Z" level=info msg="StartContainer for \"aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55\"" Dec 12 17:34:47.880211 containerd[1530]: time="2025-12-12T17:34:47.880163410Z" level=info msg="connecting to shim aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55" address="unix:///run/containerd/s/eb3fe461b76096b1addcacf7bb85d4f034782e4e82d7d4e50812c342a982ac40" protocol=ttrpc version=3 Dec 12 17:34:47.900037 systemd[1]: Started cri-containerd-aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55.scope - libcontainer container aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55. 
Dec 12 17:34:47.901632 containerd[1530]: time="2025-12-12T17:34:47.901554996Z" level=info msg="Container e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:47.936644 containerd[1530]: time="2025-12-12T17:34:47.936597729Z" level=info msg="StartContainer for \"aa6b0f64b66efd1f0a66cf0187aad5f6738aefb2fa70b11133e2a02ca3042b55\" returns successfully" Dec 12 17:34:47.941868 containerd[1530]: time="2025-12-12T17:34:47.941337993Z" level=info msg="Container e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:47.941868 containerd[1530]: time="2025-12-12T17:34:47.941711018Z" level=info msg="CreateContainer within sandbox \"6890fc5063673fc597b924f44bf6f605ca187521d6a3f8cf8ccb061cf55c763f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9\"" Dec 12 17:34:47.942483 containerd[1530]: time="2025-12-12T17:34:47.942449501Z" level=info msg="StartContainer for \"e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9\"" Dec 12 17:34:47.947264 containerd[1530]: time="2025-12-12T17:34:47.947218312Z" level=info msg="connecting to shim e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9" address="unix:///run/containerd/s/0ed722408fff9be77ee78e6da1fc4ffb31f58fd79395d05db76feb43f9a80679" protocol=ttrpc version=3 Dec 12 17:34:47.959040 containerd[1530]: time="2025-12-12T17:34:47.958992322Z" level=info msg="CreateContainer within sandbox \"a80aed58b68a957f5f670b37ac7104482ca07c7265482d7f1bef04eece79533f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398\"" Dec 12 17:34:47.960151 containerd[1530]: time="2025-12-12T17:34:47.960108595Z" level=info msg="StartContainer for \"e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398\"" 
Dec 12 17:34:47.960763 kubelet[2312]: E1212 17:34:47.960730 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:34:47.962120 containerd[1530]: time="2025-12-12T17:34:47.962066245Z" level=info msg="connecting to shim e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398" address="unix:///run/containerd/s/6e17cffdea4da81ca1def0a15021ccdc5cdbdb3ccbe3e86fa449348e12a7bba8" protocol=ttrpc version=3 Dec 12 17:34:47.973107 systemd[1]: Started cri-containerd-e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9.scope - libcontainer container e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9. Dec 12 17:34:47.985037 systemd[1]: Started cri-containerd-e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398.scope - libcontainer container e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398. 
Dec 12 17:34:48.024775 containerd[1530]: time="2025-12-12T17:34:48.024732322Z" level=info msg="StartContainer for \"e3ea769f1fee4f5bfd737b42f875b790d9888669fde4c930a0ba45fedfebe4a9\" returns successfully" Dec 12 17:34:48.034751 kubelet[2312]: E1212 17:34:48.034331 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:48.039805 kubelet[2312]: E1212 17:34:48.039183 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:48.062854 containerd[1530]: time="2025-12-12T17:34:48.062765947Z" level=info msg="StartContainer for \"e64bc31cbd4b487bd67e42c0b5cbe2c47304b31630b396f268b36a368330e398\" returns successfully" Dec 12 17:34:48.653992 kubelet[2312]: I1212 17:34:48.653958 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:49.042150 kubelet[2312]: E1212 17:34:49.042109 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:49.043156 kubelet[2312]: E1212 17:34:49.042500 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:34:49.893821 kubelet[2312]: E1212 17:34:49.890288 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:34:49.986899 kubelet[2312]: I1212 17:34:49.986856 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:34:49.986899 kubelet[2312]: E1212 17:34:49.986901 2312 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 12 17:34:49.988071 kubelet[2312]: I1212 
17:34:49.988042 2312 apiserver.go:52] "Watching apiserver" Dec 12 17:34:50.003887 kubelet[2312]: I1212 17:34:50.003644 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:34:50.005220 kubelet[2312]: I1212 17:34:50.005191 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:50.028172 kubelet[2312]: E1212 17:34:50.028135 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:50.028717 kubelet[2312]: I1212 17:34:50.028466 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:50.033213 kubelet[2312]: E1212 17:34:50.032976 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:50.033336 kubelet[2312]: I1212 17:34:50.033321 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:50.036596 kubelet[2312]: E1212 17:34:50.036575 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:50.043414 kubelet[2312]: I1212 17:34:50.043268 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:50.046376 kubelet[2312]: E1212 17:34:50.046344 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:51.046190 
kubelet[2312]: I1212 17:34:51.046155 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:51.859211 systemd[1]: Reload requested from client PID 2600 ('systemctl') (unit session-7.scope)... Dec 12 17:34:51.859224 systemd[1]: Reloading... Dec 12 17:34:51.918806 zram_generator::config[2643]: No configuration found. Dec 12 17:34:52.144666 systemd[1]: Reloading finished in 285 ms. Dec 12 17:34:52.174145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:52.187759 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:34:52.188053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:52.188114 systemd[1]: kubelet.service: Consumed 1.457s CPU time, 128.4M memory peak. Dec 12 17:34:52.189848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:52.355875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:52.373415 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:34:52.416828 kubelet[2685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:34:52.417153 kubelet[2685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:34:52.417808 kubelet[2685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:34:52.417808 kubelet[2685]: I1212 17:34:52.417346 2685 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:34:52.425078 kubelet[2685]: I1212 17:34:52.425030 2685 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:34:52.425078 kubelet[2685]: I1212 17:34:52.425061 2685 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:34:52.425280 kubelet[2685]: I1212 17:34:52.425264 2685 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:34:52.427363 kubelet[2685]: I1212 17:34:52.426780 2685 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:34:52.429983 kubelet[2685]: I1212 17:34:52.429952 2685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:34:52.437992 kubelet[2685]: I1212 17:34:52.437962 2685 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:34:52.441518 kubelet[2685]: I1212 17:34:52.441474 2685 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:34:52.441690 kubelet[2685]: I1212 17:34:52.441664 2685 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:34:52.441867 kubelet[2685]: I1212 17:34:52.441689 2685 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:34:52.441954 kubelet[2685]: I1212 17:34:52.441877 2685 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:34:52.441954 
kubelet[2685]: I1212 17:34:52.441886 2685 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:34:52.441954 kubelet[2685]: I1212 17:34:52.441925 2685 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:34:52.442075 kubelet[2685]: I1212 17:34:52.442063 2685 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:34:52.442104 kubelet[2685]: I1212 17:34:52.442081 2685 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:34:52.442104 kubelet[2685]: I1212 17:34:52.442103 2685 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:34:52.442153 kubelet[2685]: I1212 17:34:52.442115 2685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:34:52.443200 kubelet[2685]: I1212 17:34:52.443043 2685 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:34:52.446354 kubelet[2685]: I1212 17:34:52.446321 2685 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:34:52.448198 kubelet[2685]: I1212 17:34:52.448167 2685 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:34:52.448267 kubelet[2685]: I1212 17:34:52.448217 2685 server.go:1289] "Started kubelet" Dec 12 17:34:52.448289 kubelet[2685]: I1212 17:34:52.448271 2685 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:34:52.448701 kubelet[2685]: I1212 17:34:52.448646 2685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:34:52.449124 kubelet[2685]: I1212 17:34:52.449088 2685 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:34:52.449277 kubelet[2685]: I1212 17:34:52.449259 2685 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:34:52.452795 
kubelet[2685]: I1212 17:34:52.450865 2685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:34:52.452795 kubelet[2685]: I1212 17:34:52.452063 2685 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:34:52.452795 kubelet[2685]: I1212 17:34:52.452172 2685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:34:52.454927 kubelet[2685]: I1212 17:34:52.454800 2685 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:34:52.454927 kubelet[2685]: I1212 17:34:52.454920 2685 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:34:52.456699 kubelet[2685]: E1212 17:34:52.456673 2685 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:34:52.457445 kubelet[2685]: E1212 17:34:52.457417 2685 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:34:52.462640 kubelet[2685]: I1212 17:34:52.462607 2685 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:34:52.468445 kubelet[2685]: I1212 17:34:52.468298 2685 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:34:52.468445 kubelet[2685]: I1212 17:34:52.468318 2685 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:34:52.475242 kubelet[2685]: I1212 17:34:52.474923 2685 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:34:52.477150 kubelet[2685]: I1212 17:34:52.476870 2685 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:34:52.477150 kubelet[2685]: I1212 17:34:52.476894 2685 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:34:52.477150 kubelet[2685]: I1212 17:34:52.476914 2685 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:34:52.477150 kubelet[2685]: I1212 17:34:52.476921 2685 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:34:52.477150 kubelet[2685]: E1212 17:34:52.476974 2685 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:34:52.509149 kubelet[2685]: I1212 17:34:52.509126 2685 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:34:52.509149 kubelet[2685]: I1212 17:34:52.509139 2685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:34:52.509287 kubelet[2685]: I1212 17:34:52.509180 2685 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:34:52.509352 kubelet[2685]: I1212 17:34:52.509332 2685 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:34:52.509384 kubelet[2685]: I1212 17:34:52.509349 2685 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:34:52.509384 kubelet[2685]: I1212 17:34:52.509367 2685 policy_none.go:49] "None policy: Start" Dec 12 17:34:52.509384 kubelet[2685]: I1212 17:34:52.509375 2685 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:34:52.509384 kubelet[2685]: I1212 17:34:52.509384 2685 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:34:52.509473 kubelet[2685]: I1212 17:34:52.509464 2685 state_mem.go:75] "Updated machine memory state" Dec 12 17:34:52.513164 kubelet[2685]: E1212 17:34:52.513132 2685 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:34:52.513331 kubelet[2685]: I1212 
17:34:52.513302 2685 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:34:52.513375 kubelet[2685]: I1212 17:34:52.513324 2685 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:34:52.513531 kubelet[2685]: I1212 17:34:52.513514 2685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:34:52.514864 kubelet[2685]: E1212 17:34:52.514240 2685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:34:52.578131 kubelet[2685]: I1212 17:34:52.578076 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.578321 kubelet[2685]: I1212 17:34:52.578201 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:52.578321 kubelet[2685]: I1212 17:34:52.578247 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:52.587017 kubelet[2685]: E1212 17:34:52.586967 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:52.615411 kubelet[2685]: I1212 17:34:52.615377 2685 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:34:52.623105 kubelet[2685]: I1212 17:34:52.623072 2685 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:34:52.623220 kubelet[2685]: I1212 17:34:52.623161 2685 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:34:52.657201 kubelet[2685]: I1212 17:34:52.657129 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.657201 kubelet[2685]: I1212 17:34:52.657179 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:52.657357 kubelet[2685]: I1212 17:34:52.657215 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.657357 kubelet[2685]: I1212 17:34:52.657248 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.657357 kubelet[2685]: I1212 17:34:52.657267 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.657357 kubelet[2685]: I1212 17:34:52.657280 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:52.657357 kubelet[2685]: I1212 17:34:52.657296 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:52.657468 kubelet[2685]: I1212 17:34:52.657310 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c00eea8c23b1e8846ac725d648481674-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c00eea8c23b1e8846ac725d648481674\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:52.657468 kubelet[2685]: I1212 17:34:52.657347 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:52.860623 sudo[2724]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 12 17:34:52.860962 sudo[2724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 12 17:34:53.206748 sudo[2724]: pam_unix(sudo:session): session closed for user root Dec 12 17:34:53.443168 kubelet[2685]: I1212 17:34:53.443116 2685 apiserver.go:52] "Watching apiserver" Dec 12 17:34:53.454970 kubelet[2685]: I1212 17:34:53.454933 2685 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 
17:34:53.498843 kubelet[2685]: I1212 17:34:53.495849 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:53.498843 kubelet[2685]: I1212 17:34:53.496187 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:53.498843 kubelet[2685]: I1212 17:34:53.496428 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:53.505110 kubelet[2685]: E1212 17:34:53.504622 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:34:53.506083 kubelet[2685]: E1212 17:34:53.505382 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:34:53.506373 kubelet[2685]: E1212 17:34:53.506264 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:34:53.526417 kubelet[2685]: I1212 17:34:53.526351 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.526328698 podStartE2EDuration="1.526328698s" podCreationTimestamp="2025-12-12 17:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:53.525716935 +0000 UTC m=+1.148238343" watchObservedRunningTime="2025-12-12 17:34:53.526328698 +0000 UTC m=+1.148850106" Dec 12 17:34:53.526690 kubelet[2685]: I1212 17:34:53.526664 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.526655329 podStartE2EDuration="2.526655329s" podCreationTimestamp="2025-12-12 17:34:51 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:53.51777562 +0000 UTC m=+1.140296988" watchObservedRunningTime="2025-12-12 17:34:53.526655329 +0000 UTC m=+1.149176737" Dec 12 17:34:55.296864 sudo[1740]: pam_unix(sudo:session): session closed for user root Dec 12 17:34:55.298738 sshd[1739]: Connection closed by 10.0.0.1 port 34708 Dec 12 17:34:55.299344 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Dec 12 17:34:55.305328 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:34708.service: Deactivated successfully. Dec 12 17:34:55.307326 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:34:55.307592 systemd[1]: session-7.scope: Consumed 8.315s CPU time, 252.6M memory peak. Dec 12 17:34:55.308598 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:34:55.314538 systemd-logind[1514]: Removed session 7. Dec 12 17:34:57.104089 kubelet[2685]: I1212 17:34:57.104058 2685 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:34:57.104564 containerd[1530]: time="2025-12-12T17:34:57.104411835Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 12 17:34:57.105765 kubelet[2685]: I1212 17:34:57.105020 2685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:34:57.757962 kubelet[2685]: I1212 17:34:57.757894 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.757874204 podStartE2EDuration="5.757874204s" podCreationTimestamp="2025-12-12 17:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:53.533368196 +0000 UTC m=+1.155889604" watchObservedRunningTime="2025-12-12 17:34:57.757874204 +0000 UTC m=+5.380395612" Dec 12 17:34:57.817875 systemd[1]: Created slice kubepods-besteffort-pod81cd212b_34e5_41e9_9dd7_01067bccfcbe.slice - libcontainer container kubepods-besteffort-pod81cd212b_34e5_41e9_9dd7_01067bccfcbe.slice. Dec 12 17:34:57.839164 systemd[1]: Created slice kubepods-burstable-pod4926d2fa_68cb_4044_b7b0_8dbc13b33cde.slice - libcontainer container kubepods-burstable-pod4926d2fa_68cb_4044_b7b0_8dbc13b33cde.slice. 
Dec 12 17:34:57.893231 kubelet[2685]: I1212 17:34:57.893184 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rp6\" (UniqueName: \"kubernetes.io/projected/81cd212b-34e5-41e9-9dd7-01067bccfcbe-kube-api-access-q2rp6\") pod \"kube-proxy-l274g\" (UID: \"81cd212b-34e5-41e9-9dd7-01067bccfcbe\") " pod="kube-system/kube-proxy-l274g" Dec 12 17:34:57.893231 kubelet[2685]: I1212 17:34:57.893228 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hostproc\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893251 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-lib-modules\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893267 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-kernel\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893283 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvpr\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-kube-api-access-6pvpr\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893300 2685 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81cd212b-34e5-41e9-9dd7-01067bccfcbe-kube-proxy\") pod \"kube-proxy-l274g\" (UID: \"81cd212b-34e5-41e9-9dd7-01067bccfcbe\") " pod="kube-system/kube-proxy-l274g" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893313 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-run\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893395 kubelet[2685]: I1212 17:34:57.893326 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-bpf-maps\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893339 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-cgroup\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893353 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-etc-cni-netd\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893368 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cni-path\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893382 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hubble-tls\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893395 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81cd212b-34e5-41e9-9dd7-01067bccfcbe-xtables-lock\") pod \"kube-proxy-l274g\" (UID: \"81cd212b-34e5-41e9-9dd7-01067bccfcbe\") " pod="kube-system/kube-proxy-l274g" Dec 12 17:34:57.893519 kubelet[2685]: I1212 17:34:57.893409 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81cd212b-34e5-41e9-9dd7-01067bccfcbe-lib-modules\") pod \"kube-proxy-l274g\" (UID: \"81cd212b-34e5-41e9-9dd7-01067bccfcbe\") " pod="kube-system/kube-proxy-l274g" Dec 12 17:34:57.893636 kubelet[2685]: I1212 17:34:57.893424 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-xtables-lock\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893636 kubelet[2685]: I1212 17:34:57.893438 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-clustermesh-secrets\") pod \"cilium-gcm6b\" (UID: 
\"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893636 kubelet[2685]: I1212 17:34:57.893451 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-config-path\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:57.893636 kubelet[2685]: I1212 17:34:57.893467 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-net\") pod \"cilium-gcm6b\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " pod="kube-system/cilium-gcm6b" Dec 12 17:34:58.135139 containerd[1530]: time="2025-12-12T17:34:58.135034120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l274g,Uid:81cd212b-34e5-41e9-9dd7-01067bccfcbe,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:58.143351 containerd[1530]: time="2025-12-12T17:34:58.143088545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcm6b,Uid:4926d2fa-68cb-4044-b7b0-8dbc13b33cde,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:58.155007 containerd[1530]: time="2025-12-12T17:34:58.154965394Z" level=info msg="connecting to shim 9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3" address="unix:///run/containerd/s/68950c841e52bb919e020f524c7f7aa9f0636cd7efe73e45aa081bcf078cbae6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:58.168599 containerd[1530]: time="2025-12-12T17:34:58.168210309Z" level=info msg="connecting to shim 3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:58.184988 systemd[1]: Started 
cri-containerd-9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3.scope - libcontainer container 9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3. Dec 12 17:34:58.191796 systemd[1]: Started cri-containerd-3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38.scope - libcontainer container 3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38. Dec 12 17:34:58.242663 systemd[1]: Created slice kubepods-besteffort-pode8632941_9458_4f24_aa28_61ef0efde20e.slice - libcontainer container kubepods-besteffort-pode8632941_9458_4f24_aa28_61ef0efde20e.slice. Dec 12 17:34:58.253253 containerd[1530]: time="2025-12-12T17:34:58.253191597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l274g,Uid:81cd212b-34e5-41e9-9dd7-01067bccfcbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3\"" Dec 12 17:34:58.261532 containerd[1530]: time="2025-12-12T17:34:58.261495588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcm6b,Uid:4926d2fa-68cb-4044-b7b0-8dbc13b33cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\"" Dec 12 17:34:58.264329 containerd[1530]: time="2025-12-12T17:34:58.264299063Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 12 17:34:58.269569 containerd[1530]: time="2025-12-12T17:34:58.269511800Z" level=info msg="CreateContainer within sandbox \"9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:34:58.279816 containerd[1530]: time="2025-12-12T17:34:58.278900881Z" level=info msg="Container 73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:34:58.287534 containerd[1530]: 
time="2025-12-12T17:34:58.287470762Z" level=info msg="CreateContainer within sandbox \"9202a84dcdacb64f0fc84dd806f016a85eabcdb77de2a2d66d24ffed265da8a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4\"" Dec 12 17:34:58.288147 containerd[1530]: time="2025-12-12T17:34:58.288098656Z" level=info msg="StartContainer for \"73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4\"" Dec 12 17:34:58.290401 containerd[1530]: time="2025-12-12T17:34:58.290360667Z" level=info msg="connecting to shim 73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4" address="unix:///run/containerd/s/68950c841e52bb919e020f524c7f7aa9f0636cd7efe73e45aa081bcf078cbae6" protocol=ttrpc version=3 Dec 12 17:34:58.296853 kubelet[2685]: I1212 17:34:58.296812 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54cc8\" (UniqueName: \"kubernetes.io/projected/e8632941-9458-4f24-aa28-61ef0efde20e-kube-api-access-54cc8\") pod \"cilium-operator-6c4d7847fc-qb5dr\" (UID: \"e8632941-9458-4f24-aa28-61ef0efde20e\") " pod="kube-system/cilium-operator-6c4d7847fc-qb5dr" Dec 12 17:34:58.297643 kubelet[2685]: I1212 17:34:58.297576 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8632941-9458-4f24-aa28-61ef0efde20e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qb5dr\" (UID: \"e8632941-9458-4f24-aa28-61ef0efde20e\") " pod="kube-system/cilium-operator-6c4d7847fc-qb5dr" Dec 12 17:34:58.317010 systemd[1]: Started cri-containerd-73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4.scope - libcontainer container 73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4. 
Dec 12 17:34:58.375231 containerd[1530]: time="2025-12-12T17:34:58.375166696Z" level=info msg="StartContainer for \"73791e5c29c9dc1d6410c75a164720b00fcf982df009afa9dad5775bf83de6a4\" returns successfully" Dec 12 17:34:58.548211 containerd[1530]: time="2025-12-12T17:34:58.548169109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qb5dr,Uid:e8632941-9458-4f24-aa28-61ef0efde20e,Namespace:kube-system,Attempt:0,}" Dec 12 17:34:58.595416 containerd[1530]: time="2025-12-12T17:34:58.595346311Z" level=info msg="connecting to shim ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961" address="unix:///run/containerd/s/46488f38069f71dc3152cd260b8d25ca6991a140def721cc1f1e498e65bee990" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:34:58.628008 systemd[1]: Started cri-containerd-ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961.scope - libcontainer container ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961. Dec 12 17:34:58.663659 containerd[1530]: time="2025-12-12T17:34:58.663595976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qb5dr,Uid:e8632941-9458-4f24-aa28-61ef0efde20e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\"" Dec 12 17:35:01.733471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134388826.mount: Deactivated successfully. 
Dec 12 17:35:03.755330 containerd[1530]: time="2025-12-12T17:35:03.755270640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:03.756084 containerd[1530]: time="2025-12-12T17:35:03.756040560Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 12 17:35:03.756775 containerd[1530]: time="2025-12-12T17:35:03.756740982Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:03.758666 containerd[1530]: time="2025-12-12T17:35:03.758625872Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.494289637s" Dec 12 17:35:03.758716 containerd[1530]: time="2025-12-12T17:35:03.758664603Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 12 17:35:03.760640 containerd[1530]: time="2025-12-12T17:35:03.760589623Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 12 17:35:03.767317 containerd[1530]: time="2025-12-12T17:35:03.767273281Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:35:03.783353 containerd[1530]: time="2025-12-12T17:35:03.782850010Z" level=info msg="Container 044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:03.789902 containerd[1530]: time="2025-12-12T17:35:03.789862433Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\"" Dec 12 17:35:03.791734 containerd[1530]: time="2025-12-12T17:35:03.791706513Z" level=info msg="StartContainer for \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\"" Dec 12 17:35:03.792658 containerd[1530]: time="2025-12-12T17:35:03.792625792Z" level=info msg="connecting to shim 044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" protocol=ttrpc version=3 Dec 12 17:35:03.831950 systemd[1]: Started cri-containerd-044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5.scope - libcontainer container 044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5. Dec 12 17:35:03.884550 containerd[1530]: time="2025-12-12T17:35:03.884503198Z" level=info msg="StartContainer for \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" returns successfully" Dec 12 17:35:03.893088 systemd[1]: cri-containerd-044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5.scope: Deactivated successfully. 
Dec 12 17:35:03.929324 containerd[1530]: time="2025-12-12T17:35:03.929270196Z" level=info msg="received container exit event container_id:\"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" id:\"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" pid:3115 exited_at:{seconds:1765560903 nanos:923692426}"
Dec 12 17:35:04.551335 kubelet[2685]: I1212 17:35:04.551240 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l274g" podStartSLOduration=7.551223223 podStartE2EDuration="7.551223223s" podCreationTimestamp="2025-12-12 17:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:34:58.534951803 +0000 UTC m=+6.157473251" watchObservedRunningTime="2025-12-12 17:35:04.551223223 +0000 UTC m=+12.173744631"
Dec 12 17:35:04.552303 containerd[1530]: time="2025-12-12T17:35:04.552269041Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 17:35:04.559460 containerd[1530]: time="2025-12-12T17:35:04.559167103Z" level=info msg="Container 37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:04.565767 containerd[1530]: time="2025-12-12T17:35:04.565722880Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\""
Dec 12 17:35:04.566497 containerd[1530]: time="2025-12-12T17:35:04.566469544Z" level=info msg="StartContainer for \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\""
Dec 12 17:35:04.567576 containerd[1530]: time="2025-12-12T17:35:04.567549290Z" level=info msg="connecting to shim 37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" protocol=ttrpc version=3
Dec 12 17:35:04.592029 systemd[1]: Started cri-containerd-37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5.scope - libcontainer container 37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5.
Dec 12 17:35:04.621597 containerd[1530]: time="2025-12-12T17:35:04.621543091Z" level=info msg="StartContainer for \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" returns successfully"
Dec 12 17:35:04.637993 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:35:04.638232 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:35:04.638507 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:35:04.640266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:35:04.648717 systemd[1]: cri-containerd-37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5.scope: Deactivated successfully.
Dec 12 17:35:04.649194 systemd[1]: cri-containerd-37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5.scope: Consumed 23ms CPU time, 4.5M memory peak, 2.3M written to disk.
Dec 12 17:35:04.649908 containerd[1530]: time="2025-12-12T17:35:04.649873360Z" level=info msg="received container exit event container_id:\"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" id:\"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" pid:3161 exited_at:{seconds:1765560904 nanos:649204555}"
Dec 12 17:35:04.680183 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:35:04.777256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5-rootfs.mount: Deactivated successfully.
Dec 12 17:35:04.823041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963380816.mount: Deactivated successfully.
Dec 12 17:35:05.495801 containerd[1530]: time="2025-12-12T17:35:05.495742159Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:35:05.497094 containerd[1530]: time="2025-12-12T17:35:05.497067430Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Dec 12 17:35:05.498127 containerd[1530]: time="2025-12-12T17:35:05.498082828Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:35:05.499629 containerd[1530]: time="2025-12-12T17:35:05.499360607Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.738739216s"
Dec 12 17:35:05.499629 containerd[1530]: time="2025-12-12T17:35:05.499407218Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 12 17:35:05.504435 containerd[1530]: time="2025-12-12T17:35:05.504396907Z" level=info msg="CreateContainer within sandbox \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 17:35:05.511904 containerd[1530]: time="2025-12-12T17:35:05.511303004Z" level=info msg="Container 51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:05.529943 containerd[1530]: time="2025-12-12T17:35:05.529885357Z" level=info msg="CreateContainer within sandbox \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\""
Dec 12 17:35:05.530501 containerd[1530]: time="2025-12-12T17:35:05.530338744Z" level=info msg="StartContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\""
Dec 12 17:35:05.533744 containerd[1530]: time="2025-12-12T17:35:05.533671804Z" level=info msg="connecting to shim 51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d" address="unix:///run/containerd/s/46488f38069f71dc3152cd260b8d25ca6991a140def721cc1f1e498e65bee990" protocol=ttrpc version=3
Dec 12 17:35:05.538832 containerd[1530]: time="2025-12-12T17:35:05.538067514Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:35:05.548703 containerd[1530]: time="2025-12-12T17:35:05.548369647Z" level=info msg="Container af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:05.558985 systemd[1]: Started cri-containerd-51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d.scope - libcontainer container 51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d.
Dec 12 17:35:05.575210 containerd[1530]: time="2025-12-12T17:35:05.575165404Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\""
Dec 12 17:35:05.576003 containerd[1530]: time="2025-12-12T17:35:05.575973754Z" level=info msg="StartContainer for \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\""
Dec 12 17:35:05.577367 containerd[1530]: time="2025-12-12T17:35:05.577338914Z" level=info msg="connecting to shim af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" protocol=ttrpc version=3
Dec 12 17:35:05.590401 containerd[1530]: time="2025-12-12T17:35:05.590366605Z" level=info msg="StartContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" returns successfully"
Dec 12 17:35:05.597992 systemd[1]: Started cri-containerd-af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178.scope - libcontainer container af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178.
Dec 12 17:35:05.675138 containerd[1530]: time="2025-12-12T17:35:05.675056964Z" level=info msg="StartContainer for \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" returns successfully"
Dec 12 17:35:05.678396 systemd[1]: cri-containerd-af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178.scope: Deactivated successfully.
Dec 12 17:35:05.682037 containerd[1530]: time="2025-12-12T17:35:05.681993029Z" level=info msg="received container exit event container_id:\"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" id:\"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" pid:3256 exited_at:{seconds:1765560905 nanos:681625023}"
Dec 12 17:35:06.485555 update_engine[1516]: I20251212 17:35:06.485477 1516 update_attempter.cc:509] Updating boot flags...
Dec 12 17:35:06.544278 containerd[1530]: time="2025-12-12T17:35:06.544123297Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:35:06.582088 kubelet[2685]: I1212 17:35:06.581944 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qb5dr" podStartSLOduration=1.746958493 podStartE2EDuration="8.58192087s" podCreationTimestamp="2025-12-12 17:34:58 +0000 UTC" firstStartedPulling="2025-12-12 17:34:58.664969484 +0000 UTC m=+6.287490852" lastFinishedPulling="2025-12-12 17:35:05.499931821 +0000 UTC m=+13.122453229" observedRunningTime="2025-12-12 17:35:06.58030687 +0000 UTC m=+14.202828278" watchObservedRunningTime="2025-12-12 17:35:06.58192087 +0000 UTC m=+14.204442278"
Dec 12 17:35:06.596499 containerd[1530]: time="2025-12-12T17:35:06.596441742Z" level=info msg="Container 2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:06.608802 containerd[1530]: time="2025-12-12T17:35:06.608737599Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\""
Dec 12 17:35:06.611827 containerd[1530]: time="2025-12-12T17:35:06.610527437Z" level=info msg="StartContainer for \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\""
Dec 12 17:35:06.611985 containerd[1530]: time="2025-12-12T17:35:06.611871336Z" level=info msg="connecting to shim 2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" protocol=ttrpc version=3
Dec 12 17:35:06.651994 systemd[1]: Started cri-containerd-2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638.scope - libcontainer container 2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638.
Dec 12 17:35:06.676384 systemd[1]: cri-containerd-2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638.scope: Deactivated successfully.
Dec 12 17:35:06.678828 containerd[1530]: time="2025-12-12T17:35:06.678767626Z" level=info msg="received container exit event container_id:\"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" id:\"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" pid:3318 exited_at:{seconds:1765560906 nanos:677661980}"
Dec 12 17:35:06.687820 containerd[1530]: time="2025-12-12T17:35:06.685355132Z" level=info msg="StartContainer for \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" returns successfully"
Dec 12 17:35:06.703706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638-rootfs.mount: Deactivated successfully.
Dec 12 17:35:07.568812 containerd[1530]: time="2025-12-12T17:35:07.568607756Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:35:07.587323 containerd[1530]: time="2025-12-12T17:35:07.586555195Z" level=info msg="Container cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:07.593279 containerd[1530]: time="2025-12-12T17:35:07.593235929Z" level=info msg="CreateContainer within sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\""
Dec 12 17:35:07.593755 containerd[1530]: time="2025-12-12T17:35:07.593727993Z" level=info msg="StartContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\""
Dec 12 17:35:07.595125 containerd[1530]: time="2025-12-12T17:35:07.595083520Z" level=info msg="connecting to shim cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214" address="unix:///run/containerd/s/212bc0710dd2f36445ea8dc3c9829e0cc5d3b62f1103760587cf9d8249238548" protocol=ttrpc version=3
Dec 12 17:35:07.623148 systemd[1]: Started cri-containerd-cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214.scope - libcontainer container cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214.
Dec 12 17:35:07.686051 containerd[1530]: time="2025-12-12T17:35:07.685985638Z" level=info msg="StartContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" returns successfully"
Dec 12 17:35:07.831416 kubelet[2685]: I1212 17:35:07.831238 2685 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 12 17:35:07.936840 systemd[1]: Created slice kubepods-burstable-pod6e20cdb1_af5c_4dce_9669_154318c543b5.slice - libcontainer container kubepods-burstable-pod6e20cdb1_af5c_4dce_9669_154318c543b5.slice.
Dec 12 17:35:07.947003 systemd[1]: Created slice kubepods-burstable-pod9341d994_2860_48fa_adce_40bd134de70b.slice - libcontainer container kubepods-burstable-pod9341d994_2860_48fa_adce_40bd134de70b.slice.
Dec 12 17:35:08.073130 kubelet[2685]: I1212 17:35:08.072976 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9341d994-2860-48fa-adce-40bd134de70b-config-volume\") pod \"coredns-674b8bbfcf-kfhkg\" (UID: \"9341d994-2860-48fa-adce-40bd134de70b\") " pod="kube-system/coredns-674b8bbfcf-kfhkg"
Dec 12 17:35:08.073130 kubelet[2685]: I1212 17:35:08.073025 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e20cdb1-af5c-4dce-9669-154318c543b5-config-volume\") pod \"coredns-674b8bbfcf-pxwfs\" (UID: \"6e20cdb1-af5c-4dce-9669-154318c543b5\") " pod="kube-system/coredns-674b8bbfcf-pxwfs"
Dec 12 17:35:08.073130 kubelet[2685]: I1212 17:35:08.073045 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6v2j\" (UniqueName: \"kubernetes.io/projected/6e20cdb1-af5c-4dce-9669-154318c543b5-kube-api-access-p6v2j\") pod \"coredns-674b8bbfcf-pxwfs\" (UID: \"6e20cdb1-af5c-4dce-9669-154318c543b5\") " pod="kube-system/coredns-674b8bbfcf-pxwfs"
Dec 12 17:35:08.073130 kubelet[2685]: I1212 17:35:08.073062 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj9z\" (UniqueName: \"kubernetes.io/projected/9341d994-2860-48fa-adce-40bd134de70b-kube-api-access-gcj9z\") pod \"coredns-674b8bbfcf-kfhkg\" (UID: \"9341d994-2860-48fa-adce-40bd134de70b\") " pod="kube-system/coredns-674b8bbfcf-kfhkg"
Dec 12 17:35:08.244706 containerd[1530]: time="2025-12-12T17:35:08.244578199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxwfs,Uid:6e20cdb1-af5c-4dce-9669-154318c543b5,Namespace:kube-system,Attempt:0,}"
Dec 12 17:35:08.251277 containerd[1530]: time="2025-12-12T17:35:08.250445140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfhkg,Uid:9341d994-2860-48fa-adce-40bd134de70b,Namespace:kube-system,Attempt:0,}"
Dec 12 17:35:09.798279 systemd-networkd[1433]: cilium_host: Link UP
Dec 12 17:35:09.798453 systemd-networkd[1433]: cilium_net: Link UP
Dec 12 17:35:09.798680 systemd-networkd[1433]: cilium_net: Gained carrier
Dec 12 17:35:09.798886 systemd-networkd[1433]: cilium_host: Gained carrier
Dec 12 17:35:09.899927 systemd-networkd[1433]: cilium_vxlan: Link UP
Dec 12 17:35:09.899936 systemd-networkd[1433]: cilium_vxlan: Gained carrier
Dec 12 17:35:10.203822 kernel: NET: Registered PF_ALG protocol family
Dec 12 17:35:10.501062 systemd-networkd[1433]: cilium_net: Gained IPv6LL
Dec 12 17:35:10.821002 systemd-networkd[1433]: cilium_host: Gained IPv6LL
Dec 12 17:35:10.853745 systemd-networkd[1433]: lxc_health: Link UP
Dec 12 17:35:10.854283 systemd-networkd[1433]: lxc_health: Gained carrier
Dec 12 17:35:11.415819 kernel: eth0: renamed from tmpd7b73
Dec 12 17:35:11.418889 systemd-networkd[1433]: lxcd5bfcaa9afd4: Link UP
Dec 12 17:35:11.419370 systemd-networkd[1433]: lxcd5bfcaa9afd4: Gained carrier
Dec 12 17:35:11.419515 systemd-networkd[1433]: lxcb974fc0d714d: Link UP
Dec 12 17:35:11.428558 kernel: eth0: renamed from tmpa9ff2
Dec 12 17:35:11.432722 systemd-networkd[1433]: lxcb974fc0d714d: Gained carrier
Dec 12 17:35:11.587929 systemd-networkd[1433]: cilium_vxlan: Gained IPv6LL
Dec 12 17:35:12.193580 kubelet[2685]: I1212 17:35:12.193499 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gcm6b" podStartSLOduration=9.69643382 podStartE2EDuration="15.19348201s" podCreationTimestamp="2025-12-12 17:34:57 +0000 UTC" firstStartedPulling="2025-12-12 17:34:58.263395235 +0000 UTC m=+5.885916643" lastFinishedPulling="2025-12-12 17:35:03.760443425 +0000 UTC m=+11.382964833" observedRunningTime="2025-12-12 17:35:08.572475832 +0000 UTC m=+16.194997240" watchObservedRunningTime="2025-12-12 17:35:12.19348201 +0000 UTC m=+19.816003378"
Dec 12 17:35:12.291970 systemd-networkd[1433]: lxc_health: Gained IPv6LL
Dec 12 17:35:13.188110 systemd-networkd[1433]: lxcd5bfcaa9afd4: Gained IPv6LL
Dec 12 17:35:13.188374 systemd-networkd[1433]: lxcb974fc0d714d: Gained IPv6LL
Dec 12 17:35:15.496365 containerd[1530]: time="2025-12-12T17:35:15.496315282Z" level=info msg="connecting to shim a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509" address="unix:///run/containerd/s/44abdc72eeb83e8ef05c641ba84c93acffc8d69585c8b7812471c437936526d4" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:35:15.498390 containerd[1530]: time="2025-12-12T17:35:15.498353219Z" level=info msg="connecting to shim d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962" address="unix:///run/containerd/s/d49ba94848015d71115b257bdd1f77b783aa5726455e2cf8c2b4cabb3a486474" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:35:15.521982 systemd[1]: Started cri-containerd-a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509.scope - libcontainer container a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509.
Dec 12 17:35:15.525946 systemd[1]: Started cri-containerd-d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962.scope - libcontainer container d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962.
Dec 12 17:35:15.541511 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 17:35:15.541639 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 17:35:15.573344 containerd[1530]: time="2025-12-12T17:35:15.573292799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfhkg,Uid:9341d994-2860-48fa-adce-40bd134de70b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962\""
Dec 12 17:35:15.576392 containerd[1530]: time="2025-12-12T17:35:15.576331241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxwfs,Uid:6e20cdb1-af5c-4dce-9669-154318c543b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509\""
Dec 12 17:35:15.577742 containerd[1530]: time="2025-12-12T17:35:15.577715202Z" level=info msg="CreateContainer within sandbox \"d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 17:35:15.582146 containerd[1530]: time="2025-12-12T17:35:15.582104280Z" level=info msg="CreateContainer within sandbox \"a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 12 17:35:15.588045 containerd[1530]: time="2025-12-12T17:35:15.587151094Z" level=info msg="Container ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:15.592725 containerd[1530]: time="2025-12-12T17:35:15.592694941Z" level=info msg="Container 63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:15.600774 containerd[1530]: time="2025-12-12T17:35:15.600692464Z" level=info msg="CreateContainer within sandbox \"d7b731f9ceb8963dd661fd5ef1565731a5909343f8bcc0f5b0e59b61735be962\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f\""
Dec 12 17:35:15.601842 containerd[1530]: time="2025-12-12T17:35:15.601810306Z" level=info msg="StartContainer for \"ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f\""
Dec 12 17:35:15.602895 containerd[1530]: time="2025-12-12T17:35:15.602855899Z" level=info msg="CreateContainer within sandbox \"a9ff2567c2d074da6f685fce7376578118cffc320d5c3cad7412e4e5f64d5509\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be\""
Dec 12 17:35:15.603191 containerd[1530]: time="2025-12-12T17:35:15.603159303Z" level=info msg="connecting to shim ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f" address="unix:///run/containerd/s/d49ba94848015d71115b257bdd1f77b783aa5726455e2cf8c2b4cabb3a486474" protocol=ttrpc version=3
Dec 12 17:35:15.603523 containerd[1530]: time="2025-12-12T17:35:15.603451465Z" level=info msg="StartContainer for \"63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be\""
Dec 12 17:35:15.604501 containerd[1530]: time="2025-12-12T17:35:15.604468173Z" level=info msg="connecting to shim 63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be" address="unix:///run/containerd/s/44abdc72eeb83e8ef05c641ba84c93acffc8d69585c8b7812471c437936526d4" protocol=ttrpc version=3
Dec 12 17:35:15.625981 systemd[1]: Started cri-containerd-ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f.scope - libcontainer container ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f.
Dec 12 17:35:15.630422 systemd[1]: Started cri-containerd-63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be.scope - libcontainer container 63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be.
Dec 12 17:35:15.672682 containerd[1530]: time="2025-12-12T17:35:15.672644409Z" level=info msg="StartContainer for \"ac1132091af4402db16a10377b92513f79717201c9ba834a9b5d428d84edd96f\" returns successfully"
Dec 12 17:35:15.672856 containerd[1530]: time="2025-12-12T17:35:15.672777229Z" level=info msg="StartContainer for \"63fca3f0a1f4f15b681fa423ce9b0fcef79f632d0e6197528a69b0fac35578be\" returns successfully"
Dec 12 17:35:16.619321 kubelet[2685]: I1212 17:35:16.619249 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pxwfs" podStartSLOduration=18.61922076 podStartE2EDuration="18.61922076s" podCreationTimestamp="2025-12-12 17:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:16.592387141 +0000 UTC m=+24.214908549" watchObservedRunningTime="2025-12-12 17:35:16.61922076 +0000 UTC m=+24.241742168"
Dec 12 17:35:16.634220 kubelet[2685]: I1212 17:35:16.634132 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kfhkg" podStartSLOduration=18.634115275 podStartE2EDuration="18.634115275s" podCreationTimestamp="2025-12-12 17:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:16.633519232 +0000 UTC m=+24.256040640" watchObservedRunningTime="2025-12-12 17:35:16.634115275 +0000 UTC m=+24.256636683"
Dec 12 17:35:19.701941 kubelet[2685]: I1212 17:35:19.701888 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 17:35:22.667392 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274).
Dec 12 17:35:22.736131 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:22.737917 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:22.743296 systemd-logind[1514]: New session 8 of user core.
Dec 12 17:35:22.760062 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 12 17:35:22.886941 sshd[4037]: Connection closed by 10.0.0.1 port 44274
Dec 12 17:35:22.887482 sshd-session[4034]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:22.891407 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:44274.service: Deactivated successfully.
Dec 12 17:35:22.893222 systemd[1]: session-8.scope: Deactivated successfully.
Dec 12 17:35:22.894061 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit.
Dec 12 17:35:22.895465 systemd-logind[1514]: Removed session 8.
Dec 12 17:35:27.903488 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284).
Dec 12 17:35:27.956173 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:27.957629 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:27.962159 systemd-logind[1514]: New session 9 of user core.
Dec 12 17:35:27.974018 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 17:35:28.099985 sshd[4054]: Connection closed by 10.0.0.1 port 44284
Dec 12 17:35:28.100557 sshd-session[4051]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:28.104734 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:44284.service: Deactivated successfully.
Dec 12 17:35:28.106541 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 17:35:28.107468 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit.
Dec 12 17:35:28.108910 systemd-logind[1514]: Removed session 9.
Dec 12 17:35:33.113570 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:52542.service - OpenSSH per-connection server daemon (10.0.0.1:52542).
Dec 12 17:35:33.178329 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 52542 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:33.179726 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:33.183870 systemd-logind[1514]: New session 10 of user core.
Dec 12 17:35:33.195031 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 17:35:33.328569 sshd[4074]: Connection closed by 10.0.0.1 port 52542
Dec 12 17:35:33.328984 sshd-session[4071]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:33.333077 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:52542.service: Deactivated successfully.
Dec 12 17:35:33.336687 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 17:35:33.341092 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Dec 12 17:35:33.342704 systemd-logind[1514]: Removed session 10.
Dec 12 17:35:38.354086 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548).
Dec 12 17:35:38.423015 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:38.424407 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:38.435443 systemd-logind[1514]: New session 11 of user core.
Dec 12 17:35:38.449018 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 17:35:38.593547 sshd[4092]: Connection closed by 10.0.0.1 port 52548
Dec 12 17:35:38.595323 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:38.610433 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:52548.service: Deactivated successfully.
Dec 12 17:35:38.616645 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 17:35:38.617738 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Dec 12 17:35:38.624162 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556).
Dec 12 17:35:38.626078 systemd-logind[1514]: Removed session 11.
Dec 12 17:35:38.682902 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:38.684400 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:38.691887 systemd-logind[1514]: New session 12 of user core.
Dec 12 17:35:38.705658 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 17:35:38.866541 sshd[4109]: Connection closed by 10.0.0.1 port 52556
Dec 12 17:35:38.869446 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:38.882530 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:52556.service: Deactivated successfully.
Dec 12 17:35:38.884908 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 17:35:38.886417 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Dec 12 17:35:38.888901 systemd-logind[1514]: Removed session 12.
Dec 12 17:35:38.890960 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:52566.service - OpenSSH per-connection server daemon (10.0.0.1:52566).
Dec 12 17:35:38.947406 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 52566 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:38.949085 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:38.955047 systemd-logind[1514]: New session 13 of user core.
Dec 12 17:35:38.964016 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 17:35:39.086455 sshd[4124]: Connection closed by 10.0.0.1 port 52566
Dec 12 17:35:39.086767 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:39.091130 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:52566.service: Deactivated successfully.
Dec 12 17:35:39.092740 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 17:35:39.093431 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Dec 12 17:35:39.094490 systemd-logind[1514]: Removed session 13.
Dec 12 17:35:44.102608 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:46076.service - OpenSSH per-connection server daemon (10.0.0.1:46076).
Dec 12 17:35:44.164166 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 46076 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:35:44.166492 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:35:44.172168 systemd-logind[1514]: New session 14 of user core.
Dec 12 17:35:44.183809 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 17:35:44.302400 sshd[4140]: Connection closed by 10.0.0.1 port 46076
Dec 12 17:35:44.302760 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Dec 12 17:35:44.306481 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:46076.service: Deactivated successfully.
Dec 12 17:35:44.308511 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 17:35:44.309322 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
Dec 12 17:35:44.310417 systemd-logind[1514]: Removed session 14. Dec 12 17:35:49.318563 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:46090.service - OpenSSH per-connection server daemon (10.0.0.1:46090). Dec 12 17:35:49.376967 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 46090 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:49.378410 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:49.382462 systemd-logind[1514]: New session 15 of user core. Dec 12 17:35:49.395089 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:35:49.522432 sshd[4159]: Connection closed by 10.0.0.1 port 46090 Dec 12 17:35:49.522967 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:49.540309 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:46090.service: Deactivated successfully. Dec 12 17:35:49.545216 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:35:49.546875 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:35:49.550950 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Dec 12 17:35:49.551993 systemd-logind[1514]: Removed session 15. Dec 12 17:35:49.606998 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:49.608219 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:49.612423 systemd-logind[1514]: New session 16 of user core. Dec 12 17:35:49.623035 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:35:49.822141 sshd[4175]: Connection closed by 10.0.0.1 port 46096 Dec 12 17:35:49.822777 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:49.843378 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:46096.service: Deactivated successfully. Dec 12 17:35:49.846477 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:35:49.847877 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:35:49.854450 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:46104.service - OpenSSH per-connection server daemon (10.0.0.1:46104). Dec 12 17:35:49.858415 systemd-logind[1514]: Removed session 16. Dec 12 17:35:49.911650 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 46104 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:49.913217 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:49.920554 systemd-logind[1514]: New session 17 of user core. Dec 12 17:35:49.930031 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:35:50.521216 sshd[4189]: Connection closed by 10.0.0.1 port 46104 Dec 12 17:35:50.521478 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:50.530468 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:46104.service: Deactivated successfully. Dec 12 17:35:50.534333 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:35:50.535437 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:35:50.540108 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:46114.service - OpenSSH per-connection server daemon (10.0.0.1:46114). Dec 12 17:35:50.540690 systemd-logind[1514]: Removed session 17. 
Dec 12 17:35:50.612840 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 46114 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:50.613587 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:50.617554 systemd-logind[1514]: New session 18 of user core. Dec 12 17:35:50.622975 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:35:50.851622 sshd[4211]: Connection closed by 10.0.0.1 port 46114 Dec 12 17:35:50.851427 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:50.863185 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:46114.service: Deactivated successfully. Dec 12 17:35:50.867639 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:35:50.873199 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:35:50.876614 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). Dec 12 17:35:50.878087 systemd-logind[1514]: Removed session 18. Dec 12 17:35:50.943462 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:50.945031 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:50.949107 systemd-logind[1514]: New session 19 of user core. Dec 12 17:35:50.964007 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:35:51.081280 sshd[4226]: Connection closed by 10.0.0.1 port 46130 Dec 12 17:35:51.082020 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:51.085378 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:46130.service: Deactivated successfully. Dec 12 17:35:51.088330 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:35:51.090669 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. 
Dec 12 17:35:51.091630 systemd-logind[1514]: Removed session 19. Dec 12 17:35:56.093490 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:40780.service - OpenSSH per-connection server daemon (10.0.0.1:40780). Dec 12 17:35:56.152608 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 40780 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:56.154255 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:56.159022 systemd-logind[1514]: New session 20 of user core. Dec 12 17:35:56.171058 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:35:56.304225 sshd[4248]: Connection closed by 10.0.0.1 port 40780 Dec 12 17:35:56.304861 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:56.308700 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:40780.service: Deactivated successfully. Dec 12 17:35:56.310816 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:35:56.311565 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:35:56.315848 systemd-logind[1514]: Removed session 20. Dec 12 17:36:01.320907 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786). Dec 12 17:36:01.391928 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:01.393386 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:01.406511 systemd-logind[1514]: New session 21 of user core. Dec 12 17:36:01.418034 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:36:01.543659 sshd[4267]: Connection closed by 10.0.0.1 port 41786 Dec 12 17:36:01.545022 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:01.559072 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:41786.service: Deactivated successfully. Dec 12 17:36:01.564605 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:36:01.567015 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:36:01.568798 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794). Dec 12 17:36:01.569889 systemd-logind[1514]: Removed session 21. Dec 12 17:36:01.632739 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:01.634097 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:01.638834 systemd-logind[1514]: New session 22 of user core. Dec 12 17:36:01.656040 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:36:03.477074 containerd[1530]: time="2025-12-12T17:36:03.476967690Z" level=info msg="StopContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" with timeout 30 (s)" Dec 12 17:36:03.479528 kubelet[2685]: E1212 17:36:03.479431 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:03.501880 containerd[1530]: time="2025-12-12T17:36:03.501828392Z" level=info msg="Stop container \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" with signal terminated" Dec 12 17:36:03.512860 systemd[1]: cri-containerd-51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d.scope: Deactivated successfully. 
Dec 12 17:36:03.516379 containerd[1530]: time="2025-12-12T17:36:03.516340350Z" level=info msg="received container exit event container_id:\"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" id:\"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" pid:3225 exited_at:{seconds:1765560963 nanos:515943208}" Dec 12 17:36:03.518869 containerd[1530]: time="2025-12-12T17:36:03.518832680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:36:03.526577 containerd[1530]: time="2025-12-12T17:36:03.526539460Z" level=info msg="StopContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" with timeout 2 (s)" Dec 12 17:36:03.526957 containerd[1530]: time="2025-12-12T17:36:03.526931882Z" level=info msg="Stop container \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" with signal terminated" Dec 12 17:36:03.534211 systemd-networkd[1433]: lxc_health: Link DOWN Dec 12 17:36:03.534538 systemd-networkd[1433]: lxc_health: Lost carrier Dec 12 17:36:03.544165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d-rootfs.mount: Deactivated successfully. Dec 12 17:36:03.553425 systemd[1]: cri-containerd-cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214.scope: Deactivated successfully. Dec 12 17:36:03.553729 systemd[1]: cri-containerd-cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214.scope: Consumed 6.908s CPU time, 121.6M memory peak, 136K read from disk, 12.9M written to disk. 
Dec 12 17:36:03.554253 containerd[1530]: time="2025-12-12T17:36:03.554192518Z" level=info msg="received container exit event container_id:\"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" id:\"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" pid:3355 exited_at:{seconds:1765560963 nanos:553962368}" Dec 12 17:36:03.562217 containerd[1530]: time="2025-12-12T17:36:03.562178245Z" level=info msg="StopContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" returns successfully" Dec 12 17:36:03.562743 containerd[1530]: time="2025-12-12T17:36:03.562719221Z" level=info msg="StopPodSandbox for \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\"" Dec 12 17:36:03.575623 containerd[1530]: time="2025-12-12T17:36:03.575549974Z" level=info msg="Container to stop \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.576369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214-rootfs.mount: Deactivated successfully. Dec 12 17:36:03.584828 systemd[1]: cri-containerd-ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961.scope: Deactivated successfully. 
Dec 12 17:36:03.588211 containerd[1530]: time="2025-12-12T17:36:03.588162097Z" level=info msg="StopContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" returns successfully" Dec 12 17:36:03.588652 containerd[1530]: time="2025-12-12T17:36:03.588626036Z" level=info msg="StopPodSandbox for \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\"" Dec 12 17:36:03.588710 containerd[1530]: time="2025-12-12T17:36:03.588691114Z" level=info msg="Container to stop \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.588740 containerd[1530]: time="2025-12-12T17:36:03.588709793Z" level=info msg="Container to stop \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.588740 containerd[1530]: time="2025-12-12T17:36:03.588719352Z" level=info msg="Container to stop \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.588740 containerd[1530]: time="2025-12-12T17:36:03.588728232Z" level=info msg="Container to stop \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.588740 containerd[1530]: time="2025-12-12T17:36:03.588735992Z" level=info msg="Container to stop \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:36:03.589334 containerd[1530]: time="2025-12-12T17:36:03.589305166Z" level=info msg="received sandbox exit event container_id:\"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" id:\"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" exit_status:137 exited_at:{seconds:1765560963 
nanos:589036378}" monitor_name=podsandbox Dec 12 17:36:03.594761 systemd[1]: cri-containerd-3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38.scope: Deactivated successfully. Dec 12 17:36:03.603143 containerd[1530]: time="2025-12-12T17:36:03.603094557Z" level=info msg="received sandbox exit event container_id:\"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" id:\"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" exit_status:137 exited_at:{seconds:1765560963 nanos:602135399}" monitor_name=podsandbox Dec 12 17:36:03.618872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961-rootfs.mount: Deactivated successfully. Dec 12 17:36:03.624205 containerd[1530]: time="2025-12-12T17:36:03.623903038Z" level=info msg="shim disconnected" id=ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961 namespace=k8s.io Dec 12 17:36:03.629093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38-rootfs.mount: Deactivated successfully. 
Dec 12 17:36:03.633668 containerd[1530]: time="2025-12-12T17:36:03.624131547Z" level=warning msg="cleaning up after shim disconnected" id=ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961 namespace=k8s.io Dec 12 17:36:03.634016 containerd[1530]: time="2025-12-12T17:36:03.633980832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:36:03.635692 containerd[1530]: time="2025-12-12T17:36:03.635624240Z" level=info msg="shim disconnected" id=3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38 namespace=k8s.io Dec 12 17:36:03.635692 containerd[1530]: time="2025-12-12T17:36:03.635666038Z" level=warning msg="cleaning up after shim disconnected" id=3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38 namespace=k8s.io Dec 12 17:36:03.635692 containerd[1530]: time="2025-12-12T17:36:03.635696276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:36:03.654283 containerd[1530]: time="2025-12-12T17:36:03.654083104Z" level=info msg="received sandbox container exit event sandbox_id:\"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" exit_status:137 exited_at:{seconds:1765560963 nanos:589036378}" monitor_name=criService Dec 12 17:36:03.654283 containerd[1530]: time="2025-12-12T17:36:03.654169580Z" level=info msg="TearDown network for sandbox \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" successfully" Dec 12 17:36:03.654283 containerd[1530]: time="2025-12-12T17:36:03.654194739Z" level=info msg="StopPodSandbox for \"ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961\" returns successfully" Dec 12 17:36:03.655221 containerd[1530]: time="2025-12-12T17:36:03.655164776Z" level=info msg="received sandbox container exit event sandbox_id:\"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" exit_status:137 exited_at:{seconds:1765560963 nanos:602135399}" monitor_name=criService Dec 12 17:36:03.655806 containerd[1530]: 
time="2025-12-12T17:36:03.655731951Z" level=info msg="TearDown network for sandbox \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" successfully" Dec 12 17:36:03.655806 containerd[1530]: time="2025-12-12T17:36:03.655764030Z" level=info msg="StopPodSandbox for \"3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38\" returns successfully" Dec 12 17:36:03.656881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff12e36a70ea373955971a466c1acac66a783261c94695c48eddc163de177961-shm.mount: Deactivated successfully. Dec 12 17:36:03.705626 kubelet[2685]: I1212 17:36:03.705570 2685 scope.go:117] "RemoveContainer" containerID="51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d" Dec 12 17:36:03.707426 containerd[1530]: time="2025-12-12T17:36:03.707381909Z" level=info msg="RemoveContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\"" Dec 12 17:36:03.733315 containerd[1530]: time="2025-12-12T17:36:03.733198208Z" level=info msg="RemoveContainer for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" returns successfully" Dec 12 17:36:03.733600 kubelet[2685]: I1212 17:36:03.733575 2685 scope.go:117] "RemoveContainer" containerID="51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d" Dec 12 17:36:03.734205 kubelet[2685]: I1212 17:36:03.734182 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cni-path\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734205 kubelet[2685]: I1212 17:36:03.734217 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-etc-cni-netd\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 
17:36:03.734205 kubelet[2685]: I1212 17:36:03.734238 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-run\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734205 kubelet[2685]: I1212 17:36:03.734254 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-net\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734205 kubelet[2685]: I1212 17:36:03.734270 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-xtables-lock\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734205 kubelet[2685]: I1212 17:36:03.734285 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hostproc\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734301 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-lib-modules\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734321 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54cc8\" (UniqueName: \"kubernetes.io/projected/e8632941-9458-4f24-aa28-61ef0efde20e-kube-api-access-54cc8\") pod 
\"e8632941-9458-4f24-aa28-61ef0efde20e\" (UID: \"e8632941-9458-4f24-aa28-61ef0efde20e\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734337 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-bpf-maps\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734353 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-kernel\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734373 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvpr\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-kube-api-access-6pvpr\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734554 kubelet[2685]: I1212 17:36:03.734464 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8632941-9458-4f24-aa28-61ef0efde20e-cilium-config-path\") pod \"e8632941-9458-4f24-aa28-61ef0efde20e\" (UID: \"e8632941-9458-4f24-aa28-61ef0efde20e\") " Dec 12 17:36:03.734880 kubelet[2685]: I1212 17:36:03.734487 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-clustermesh-secrets\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734880 kubelet[2685]: I1212 17:36:03.734504 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-config-path\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734880 kubelet[2685]: I1212 17:36:03.734522 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hubble-tls\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.734880 kubelet[2685]: I1212 17:36:03.734536 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-cgroup\") pod \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\" (UID: \"4926d2fa-68cb-4044-b7b0-8dbc13b33cde\") " Dec 12 17:36:03.739494 kubelet[2685]: I1212 17:36:03.739324 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739494 kubelet[2685]: I1212 17:36:03.739365 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739494 kubelet[2685]: I1212 17:36:03.739338 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739494 kubelet[2685]: I1212 17:36:03.739407 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739494 kubelet[2685]: I1212 17:36:03.739416 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739818 kubelet[2685]: I1212 17:36:03.739389 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739818 kubelet[2685]: I1212 17:36:03.739405 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739818 kubelet[2685]: I1212 17:36:03.739434 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.739818 kubelet[2685]: I1212 17:36:03.739446 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hostproc" (OuterVolumeSpecName: "hostproc") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.740227 kubelet[2685]: I1212 17:36:03.740197 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cni-path" (OuterVolumeSpecName: "cni-path") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:36:03.744221 kubelet[2685]: I1212 17:36:03.744175 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8632941-9458-4f24-aa28-61ef0efde20e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8632941-9458-4f24-aa28-61ef0efde20e" (UID: "e8632941-9458-4f24-aa28-61ef0efde20e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:36:03.744516 containerd[1530]: time="2025-12-12T17:36:03.733934936Z" level=error msg="ContainerStatus for \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\": not found" Dec 12 17:36:03.746598 kubelet[2685]: I1212 17:36:03.746564 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:36:03.746941 kubelet[2685]: I1212 17:36:03.746902 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-kube-api-access-6pvpr" (OuterVolumeSpecName: "kube-api-access-6pvpr") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "kube-api-access-6pvpr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:36:03.748698 kubelet[2685]: I1212 17:36:03.748598 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:36:03.748913 kubelet[2685]: I1212 17:36:03.748882 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8632941-9458-4f24-aa28-61ef0efde20e-kube-api-access-54cc8" (OuterVolumeSpecName: "kube-api-access-54cc8") pod "e8632941-9458-4f24-aa28-61ef0efde20e" (UID: "e8632941-9458-4f24-aa28-61ef0efde20e"). InnerVolumeSpecName "kube-api-access-54cc8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:36:03.749226 kubelet[2685]: E1212 17:36:03.749191 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\": not found" containerID="51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d" Dec 12 17:36:03.749331 kubelet[2685]: I1212 17:36:03.749297 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4926d2fa-68cb-4044-b7b0-8dbc13b33cde" (UID: "4926d2fa-68cb-4044-b7b0-8dbc13b33cde"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:36:03.749418 kubelet[2685]: I1212 17:36:03.749306 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d"} err="failed to get container status \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\": rpc error: code = NotFound desc = an error occurred when try to find container \"51c0208a6a4fdb6410d4a27958267a5eaae8683518892a536e8d9c2b339d367d\": not found" Dec 12 17:36:03.749478 kubelet[2685]: I1212 17:36:03.749467 2685 scope.go:117] "RemoveContainer" containerID="cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214" Dec 12 17:36:03.751866 containerd[1530]: time="2025-12-12T17:36:03.751451122Z" level=info msg="RemoveContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\"" Dec 12 17:36:03.755922 containerd[1530]: time="2025-12-12T17:36:03.755884926Z" level=info msg="RemoveContainer for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" returns successfully" Dec 12 17:36:03.756165 kubelet[2685]: I1212 17:36:03.756138 2685 scope.go:117] "RemoveContainer" containerID="2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638" Dec 12 17:36:03.757686 containerd[1530]: time="2025-12-12T17:36:03.757655167Z" level=info msg="RemoveContainer for \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\"" Dec 12 17:36:03.769920 containerd[1530]: time="2025-12-12T17:36:03.769858948Z" level=info msg="RemoveContainer for \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" returns successfully" Dec 12 17:36:03.770167 kubelet[2685]: I1212 17:36:03.770139 2685 scope.go:117] "RemoveContainer" containerID="af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178" Dec 12 17:36:03.772556 containerd[1530]: time="2025-12-12T17:36:03.772510591Z" level=info msg="RemoveContainer for 
\"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\"" Dec 12 17:36:03.776808 containerd[1530]: time="2025-12-12T17:36:03.776737804Z" level=info msg="RemoveContainer for \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" returns successfully" Dec 12 17:36:03.777012 kubelet[2685]: I1212 17:36:03.776982 2685 scope.go:117] "RemoveContainer" containerID="37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5" Dec 12 17:36:03.778661 containerd[1530]: time="2025-12-12T17:36:03.778635320Z" level=info msg="RemoveContainer for \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\"" Dec 12 17:36:03.784578 containerd[1530]: time="2025-12-12T17:36:03.784534540Z" level=info msg="RemoveContainer for \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" returns successfully" Dec 12 17:36:03.784853 kubelet[2685]: I1212 17:36:03.784826 2685 scope.go:117] "RemoveContainer" containerID="044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5" Dec 12 17:36:03.786529 containerd[1530]: time="2025-12-12T17:36:03.786500853Z" level=info msg="RemoveContainer for \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\"" Dec 12 17:36:03.789307 containerd[1530]: time="2025-12-12T17:36:03.789277570Z" level=info msg="RemoveContainer for \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" returns successfully" Dec 12 17:36:03.789575 kubelet[2685]: I1212 17:36:03.789513 2685 scope.go:117] "RemoveContainer" containerID="cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214" Dec 12 17:36:03.789891 containerd[1530]: time="2025-12-12T17:36:03.789852585Z" level=error msg="ContainerStatus for \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\": not found" Dec 12 17:36:03.790176 kubelet[2685]: E1212 
17:36:03.790117 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\": not found" containerID="cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214" Dec 12 17:36:03.790345 kubelet[2685]: I1212 17:36:03.790152 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214"} err="failed to get container status \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\": rpc error: code = NotFound desc = an error occurred when try to find container \"cce948f24f5dad1aa51695a9306889605238cc028a70466bcc0ad9aed4546214\": not found" Dec 12 17:36:03.790345 kubelet[2685]: I1212 17:36:03.790263 2685 scope.go:117] "RemoveContainer" containerID="2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638" Dec 12 17:36:03.790585 containerd[1530]: time="2025-12-12T17:36:03.790552514Z" level=error msg="ContainerStatus for \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\": not found" Dec 12 17:36:03.790701 kubelet[2685]: E1212 17:36:03.790682 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\": not found" containerID="2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638" Dec 12 17:36:03.790732 kubelet[2685]: I1212 17:36:03.790704 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638"} err="failed to get container status 
\"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c22b2482d5f1bba9c99fc46b87254185d8753f9cb4ca60d9d14b76fc0f1a638\": not found" Dec 12 17:36:03.790732 kubelet[2685]: I1212 17:36:03.790720 2685 scope.go:117] "RemoveContainer" containerID="af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178" Dec 12 17:36:03.790949 containerd[1530]: time="2025-12-12T17:36:03.790914298Z" level=error msg="ContainerStatus for \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\": not found" Dec 12 17:36:03.791198 kubelet[2685]: E1212 17:36:03.791054 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\": not found" containerID="af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178" Dec 12 17:36:03.791198 kubelet[2685]: I1212 17:36:03.791076 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178"} err="failed to get container status \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\": rpc error: code = NotFound desc = an error occurred when try to find container \"af3557857f5964bf30475fb1ecbfbbadaba4826c13678bf94af56e6fa6775178\": not found" Dec 12 17:36:03.791198 kubelet[2685]: I1212 17:36:03.791089 2685 scope.go:117] "RemoveContainer" containerID="37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5" Dec 12 17:36:03.791441 containerd[1530]: time="2025-12-12T17:36:03.791408236Z" level=error msg="ContainerStatus for \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\": not found" Dec 12 17:36:03.791742 kubelet[2685]: E1212 17:36:03.791719 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\": not found" containerID="37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5" Dec 12 17:36:03.791810 kubelet[2685]: I1212 17:36:03.791747 2685 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5"} err="failed to get container status \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"37161eb3e5abb6508f7f82ca271fe4cdbea18305a6bde00b5835a003fd8978d5\": not found" Dec 12 17:36:03.791810 kubelet[2685]: I1212 17:36:03.791764 2685 scope.go:117] "RemoveContainer" containerID="044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5" Dec 12 17:36:03.792203 containerd[1530]: time="2025-12-12T17:36:03.792134884Z" level=error msg="ContainerStatus for \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\": not found" Dec 12 17:36:03.792319 kubelet[2685]: E1212 17:36:03.792299 2685 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\": not found" containerID="044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5" Dec 12 17:36:03.792359 kubelet[2685]: I1212 17:36:03.792326 2685 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5"} err="failed to get container status \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"044cffe5965985e30d9fa775665a11faf4569193014c9d54c169c2136148f6a5\": not found" Dec 12 17:36:03.835804 kubelet[2685]: I1212 17:36:03.835760 2685 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835804 kubelet[2685]: I1212 17:36:03.835815 2685 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835828 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6pvpr\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-kube-api-access-6pvpr\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835842 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8632941-9458-4f24-aa28-61ef0efde20e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835850 2685 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835858 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835866 2685 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835874 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835882 2685 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.835976 kubelet[2685]: I1212 17:36:03.835890 2685 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.836157 kubelet[2685]: I1212 17:36:03.835897 2685 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.836157 kubelet[2685]: I1212 17:36:03.835906 2685 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.836157 kubelet[2685]: I1212 17:36:03.835913 2685 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 12 
17:36:03.836157 kubelet[2685]: I1212 17:36:03.835920 2685 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.836157 kubelet[2685]: I1212 17:36:03.835927 2685 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4926d2fa-68cb-4044-b7b0-8dbc13b33cde-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.836157 kubelet[2685]: I1212 17:36:03.835935 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54cc8\" (UniqueName: \"kubernetes.io/projected/e8632941-9458-4f24-aa28-61ef0efde20e-kube-api-access-54cc8\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:04.000670 systemd[1]: Removed slice kubepods-besteffort-pode8632941_9458_4f24_aa28_61ef0efde20e.slice - libcontainer container kubepods-besteffort-pode8632941_9458_4f24_aa28_61ef0efde20e.slice. Dec 12 17:36:04.012429 systemd[1]: Removed slice kubepods-burstable-pod4926d2fa_68cb_4044_b7b0_8dbc13b33cde.slice - libcontainer container kubepods-burstable-pod4926d2fa_68cb_4044_b7b0_8dbc13b33cde.slice. Dec 12 17:36:04.013977 systemd[1]: kubepods-burstable-pod4926d2fa_68cb_4044_b7b0_8dbc13b33cde.slice: Consumed 7.006s CPU time, 121.9M memory peak, 140K read from disk, 15.2M written to disk. 
Dec 12 17:36:04.481865 kubelet[2685]: I1212 17:36:04.481483 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4926d2fa-68cb-4044-b7b0-8dbc13b33cde" path="/var/lib/kubelet/pods/4926d2fa-68cb-4044-b7b0-8dbc13b33cde/volumes" Dec 12 17:36:04.482190 kubelet[2685]: I1212 17:36:04.482073 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8632941-9458-4f24-aa28-61ef0efde20e" path="/var/lib/kubelet/pods/e8632941-9458-4f24-aa28-61ef0efde20e/volumes" Dec 12 17:36:04.543681 systemd[1]: var-lib-kubelet-pods-e8632941\x2d9458\x2d4f24\x2daa28\x2d61ef0efde20e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54cc8.mount: Deactivated successfully. Dec 12 17:36:04.543816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d9ef34d77a5451d80a2eba6f3384a5de3a8800d554006737bc68a77b3458f38-shm.mount: Deactivated successfully. Dec 12 17:36:04.543874 systemd[1]: var-lib-kubelet-pods-4926d2fa\x2d68cb\x2d4044\x2db7b0\x2d8dbc13b33cde-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6pvpr.mount: Deactivated successfully. Dec 12 17:36:04.543931 systemd[1]: var-lib-kubelet-pods-4926d2fa\x2d68cb\x2d4044\x2db7b0\x2d8dbc13b33cde-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:36:04.543987 systemd[1]: var-lib-kubelet-pods-4926d2fa\x2d68cb\x2d4044\x2db7b0\x2d8dbc13b33cde-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 17:36:05.412193 sshd[4283]: Connection closed by 10.0.0.1 port 41794 Dec 12 17:36:05.413385 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:05.428134 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:41794.service: Deactivated successfully. Dec 12 17:36:05.430465 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:36:05.430649 systemd[1]: session-22.scope: Consumed 1.085s CPU time, 22.9M memory peak. Dec 12 17:36:05.433844 systemd-logind[1514]: Session 22 logged out. 
Waiting for processes to exit. Dec 12 17:36:05.436658 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:41800.service - OpenSSH per-connection server daemon (10.0.0.1:41800). Dec 12 17:36:05.439657 systemd-logind[1514]: Removed session 22. Dec 12 17:36:05.488989 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 41800 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:05.490308 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:05.496964 systemd-logind[1514]: New session 23 of user core. Dec 12 17:36:05.506987 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:36:06.459815 sshd[4433]: Connection closed by 10.0.0.1 port 41800 Dec 12 17:36:06.460399 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:06.469420 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:41800.service: Deactivated successfully. Dec 12 17:36:06.472217 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:36:06.474699 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:36:06.478599 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:41804.service - OpenSSH per-connection server daemon (10.0.0.1:41804). Dec 12 17:36:06.481510 systemd-logind[1514]: Removed session 23. Dec 12 17:36:06.521138 systemd[1]: Created slice kubepods-burstable-pod4c6d7d9d_f878_43e2_aea9_66d559f1e623.slice - libcontainer container kubepods-burstable-pod4c6d7d9d_f878_43e2_aea9_66d559f1e623.slice. 
Dec 12 17:36:06.547776 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 41804 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:06.548671 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:06.551787 kubelet[2685]: I1212 17:36:06.551745 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-cni-path\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552001 kubelet[2685]: I1212 17:36:06.551792 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c6d7d9d-f878-43e2-aea9-66d559f1e623-hubble-tls\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552001 kubelet[2685]: I1212 17:36:06.551843 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c6d7d9d-f878-43e2-aea9-66d559f1e623-cilium-config-path\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552001 kubelet[2685]: I1212 17:36:06.551860 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-host-proc-sys-kernel\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552001 kubelet[2685]: I1212 17:36:06.551885 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4c6d7d9d-f878-43e2-aea9-66d559f1e623-clustermesh-secrets\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552001 kubelet[2685]: I1212 17:36:06.551911 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqltw\" (UniqueName: \"kubernetes.io/projected/4c6d7d9d-f878-43e2-aea9-66d559f1e623-kube-api-access-pqltw\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.551960 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-cilium-run\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.551992 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-xtables-lock\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.552012 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c6d7d9d-f878-43e2-aea9-66d559f1e623-cilium-ipsec-secrets\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.552034 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-host-proc-sys-net\") pod \"cilium-k4894\" 
(UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.552062 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-hostproc\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552114 kubelet[2685]: I1212 17:36:06.552081 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-cilium-cgroup\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552226 kubelet[2685]: I1212 17:36:06.552109 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-etc-cni-netd\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552226 kubelet[2685]: I1212 17:36:06.552125 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-lib-modules\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552226 kubelet[2685]: I1212 17:36:06.552159 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c6d7d9d-f878-43e2-aea9-66d559f1e623-bpf-maps\") pod \"cilium-k4894\" (UID: \"4c6d7d9d-f878-43e2-aea9-66d559f1e623\") " pod="kube-system/cilium-k4894" Dec 12 17:36:06.552499 systemd-logind[1514]: New session 24 of user core. 
Dec 12 17:36:06.562992 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 17:36:06.612386 sshd[4450]: Connection closed by 10.0.0.1 port 41804 Dec 12 17:36:06.613602 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:06.626265 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:41804.service: Deactivated successfully. Dec 12 17:36:06.629341 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:36:06.630324 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:36:06.633298 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818). Dec 12 17:36:06.633835 systemd-logind[1514]: Removed session 24. Dec 12 17:36:06.704400 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:06.707876 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:06.712876 systemd-logind[1514]: New session 25 of user core. Dec 12 17:36:06.718968 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 12 17:36:06.828275 kubelet[2685]: E1212 17:36:06.828234 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:06.828895 containerd[1530]: time="2025-12-12T17:36:06.828838095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4894,Uid:4c6d7d9d-f878-43e2-aea9-66d559f1e623,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:06.856012 containerd[1530]: time="2025-12-12T17:36:06.855966654Z" level=info msg="connecting to shim b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:06.876987 systemd[1]: Started cri-containerd-b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045.scope - libcontainer container b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045. 
Dec 12 17:36:06.910309 containerd[1530]: time="2025-12-12T17:36:06.910256369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4894,Uid:4c6d7d9d-f878-43e2-aea9-66d559f1e623,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\"" Dec 12 17:36:06.911050 kubelet[2685]: E1212 17:36:06.911015 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:06.918769 containerd[1530]: time="2025-12-12T17:36:06.918703737Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:36:06.928199 containerd[1530]: time="2025-12-12T17:36:06.928161148Z" level=info msg="Container a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:06.934274 containerd[1530]: time="2025-12-12T17:36:06.934225084Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6\"" Dec 12 17:36:06.935810 containerd[1530]: time="2025-12-12T17:36:06.935184769Z" level=info msg="StartContainer for \"a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6\"" Dec 12 17:36:06.936002 containerd[1530]: time="2025-12-12T17:36:06.935966060Z" level=info msg="connecting to shim a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" protocol=ttrpc version=3 Dec 12 17:36:06.955963 systemd[1]: Started cri-containerd-a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6.scope - libcontainer 
container a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6. Dec 12 17:36:06.988358 containerd[1530]: time="2025-12-12T17:36:06.988230330Z" level=info msg="StartContainer for \"a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6\" returns successfully" Dec 12 17:36:06.998211 systemd[1]: cri-containerd-a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6.scope: Deactivated successfully. Dec 12 17:36:07.000183 containerd[1530]: time="2025-12-12T17:36:07.000130211Z" level=info msg="received container exit event container_id:\"a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6\" id:\"a64d21a4ccea4ca9567925f2e1c9dc13eaa417ac7817b079bf97ecd63a04b9f6\" pid:4530 exited_at:{seconds:1765560966 nanos:999868460}" Dec 12 17:36:07.478213 kubelet[2685]: E1212 17:36:07.478105 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:07.530763 kubelet[2685]: E1212 17:36:07.530725 2685 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:36:07.713702 kubelet[2685]: E1212 17:36:07.713650 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:07.724301 containerd[1530]: time="2025-12-12T17:36:07.724251438Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:36:07.733498 containerd[1530]: time="2025-12-12T17:36:07.732821781Z" level=info msg="Container a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:07.735268 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029061625.mount: Deactivated successfully. Dec 12 17:36:07.742695 containerd[1530]: time="2025-12-12T17:36:07.742575043Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4\"" Dec 12 17:36:07.743205 containerd[1530]: time="2025-12-12T17:36:07.743101625Z" level=info msg="StartContainer for \"a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4\"" Dec 12 17:36:07.744068 containerd[1530]: time="2025-12-12T17:36:07.744043353Z" level=info msg="connecting to shim a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" protocol=ttrpc version=3 Dec 12 17:36:07.772027 systemd[1]: Started cri-containerd-a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4.scope - libcontainer container a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4. Dec 12 17:36:07.804906 containerd[1530]: time="2025-12-12T17:36:07.804864445Z" level=info msg="StartContainer for \"a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4\" returns successfully" Dec 12 17:36:07.809675 systemd[1]: cri-containerd-a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4.scope: Deactivated successfully. 
Dec 12 17:36:07.810390 containerd[1530]: time="2025-12-12T17:36:07.810318496Z" level=info msg="received container exit event container_id:\"a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4\" id:\"a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4\" pid:4576 exited_at:{seconds:1765560967 nanos:810032946}"
Dec 12 17:36:07.831529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a336222ee45b418f65bc3c1b0c49047d24696c1e6dbdb91f00a6451ecfcf75a4-rootfs.mount: Deactivated successfully.
Dec 12 17:36:08.718264 kubelet[2685]: E1212 17:36:08.718189 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:08.723728 containerd[1530]: time="2025-12-12T17:36:08.723684319Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:36:08.737282 containerd[1530]: time="2025-12-12T17:36:08.737193441Z" level=info msg="Container 8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:36:08.742613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592061023.mount: Deactivated successfully.
Dec 12 17:36:08.746114 containerd[1530]: time="2025-12-12T17:36:08.746065913Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5\""
Dec 12 17:36:08.746532 containerd[1530]: time="2025-12-12T17:36:08.746510258Z" level=info msg="StartContainer for \"8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5\""
Dec 12 17:36:08.747922 containerd[1530]: time="2025-12-12T17:36:08.747876254Z" level=info msg="connecting to shim 8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" protocol=ttrpc version=3
Dec 12 17:36:08.768298 systemd[1]: Started cri-containerd-8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5.scope - libcontainer container 8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5.
Dec 12 17:36:08.836024 containerd[1530]: time="2025-12-12T17:36:08.835975035Z" level=info msg="StartContainer for \"8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5\" returns successfully"
Dec 12 17:36:08.837553 systemd[1]: cri-containerd-8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5.scope: Deactivated successfully.
Dec 12 17:36:08.839342 containerd[1530]: time="2025-12-12T17:36:08.839212210Z" level=info msg="received container exit event container_id:\"8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5\" id:\"8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5\" pid:4621 exited_at:{seconds:1765560968 nanos:838463115}"
Dec 12 17:36:09.722463 kubelet[2685]: E1212 17:36:09.722394 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:09.727275 containerd[1530]: time="2025-12-12T17:36:09.726747311Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:36:09.734016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d438d2531d5eb8563d3ff43cfb6994ef6d35a03e17e53bb620d15353f1c14d5-rootfs.mount: Deactivated successfully.
Dec 12 17:36:09.738734 containerd[1530]: time="2025-12-12T17:36:09.738116406Z" level=info msg="Container 307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:36:09.749442 containerd[1530]: time="2025-12-12T17:36:09.749401344Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f\""
Dec 12 17:36:09.750136 containerd[1530]: time="2025-12-12T17:36:09.750049724Z" level=info msg="StartContainer for \"307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f\""
Dec 12 17:36:09.751243 containerd[1530]: time="2025-12-12T17:36:09.751208929Z" level=info msg="connecting to shim 307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" protocol=ttrpc version=3
Dec 12 17:36:09.773939 systemd[1]: Started cri-containerd-307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f.scope - libcontainer container 307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f.
Dec 12 17:36:09.796409 systemd[1]: cri-containerd-307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f.scope: Deactivated successfully.
Dec 12 17:36:09.800395 containerd[1530]: time="2025-12-12T17:36:09.800230243Z" level=info msg="received container exit event container_id:\"307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f\" id:\"307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f\" pid:4659 exited_at:{seconds:1765560969 nanos:797134777}"
Dec 12 17:36:09.812911 containerd[1530]: time="2025-12-12T17:36:09.808025167Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6d7d9d_f878_43e2_aea9_66d559f1e623.slice/cri-containerd-307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f.scope/memory.events\": no such file or directory"
Dec 12 17:36:09.823324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f-rootfs.mount: Deactivated successfully.
Dec 12 17:36:09.827516 containerd[1530]: time="2025-12-12T17:36:09.827465017Z" level=info msg="StartContainer for \"307deb7a73b4b38bf6ab2784a237b76edf1b93b1ae34a722cb48d981474e8b4f\" returns successfully"
Dec 12 17:36:10.731069 kubelet[2685]: E1212 17:36:10.730945 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:10.739289 containerd[1530]: time="2025-12-12T17:36:10.738952389Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:36:10.759807 containerd[1530]: time="2025-12-12T17:36:10.759553447Z" level=info msg="Container 382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:36:10.780471 containerd[1530]: time="2025-12-12T17:36:10.780410977Z" level=info msg="CreateContainer within sandbox \"b4f3dda47ccc0dbfab9122e1adaebd13c125b7d1b0234542869941d00ab34045\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f\""
Dec 12 17:36:10.783653 containerd[1530]: time="2025-12-12T17:36:10.783613647Z" level=info msg="StartContainer for \"382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f\""
Dec 12 17:36:10.785868 containerd[1530]: time="2025-12-12T17:36:10.785762146Z" level=info msg="connecting to shim 382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f" address="unix:///run/containerd/s/cb0d06efd5981f3c8cb1d6256bf04ca7c32f732ea7e8d69a210ce1ddbf069359" protocol=ttrpc version=3
Dec 12 17:36:10.808014 systemd[1]: Started cri-containerd-382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f.scope - libcontainer container 382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f.
Dec 12 17:36:10.860955 containerd[1530]: time="2025-12-12T17:36:10.860906743Z" level=info msg="StartContainer for \"382f2c78890196ce63f56f33fbf76fc8cbdf27f7e923c0cf900a7b7aed92a47f\" returns successfully"
Dec 12 17:36:11.135808 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 12 17:36:11.739344 kubelet[2685]: E1212 17:36:11.739273 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:12.829750 kubelet[2685]: E1212 17:36:12.829709 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:14.115360 systemd-networkd[1433]: lxc_health: Link UP
Dec 12 17:36:14.116079 systemd-networkd[1433]: lxc_health: Gained carrier
Dec 12 17:36:14.479524 kubelet[2685]: E1212 17:36:14.479300 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:14.830271 kubelet[2685]: E1212 17:36:14.830122 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:14.888040 kubelet[2685]: I1212 17:36:14.887955 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k4894" podStartSLOduration=8.887847864 podStartE2EDuration="8.887847864s" podCreationTimestamp="2025-12-12 17:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:11.75626992 +0000 UTC m=+79.378791328" watchObservedRunningTime="2025-12-12 17:36:14.887847864 +0000 UTC m=+82.510369272"
Dec 12 17:36:15.748266 kubelet[2685]: E1212 17:36:15.748165 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:15.971975 systemd-networkd[1433]: lxc_health: Gained IPv6LL
Dec 12 17:36:16.749798 kubelet[2685]: E1212 17:36:16.749751 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:19.583811 sshd[4464]: Connection closed by 10.0.0.1 port 41818
Dec 12 17:36:19.584564 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Dec 12 17:36:19.588530 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:41818.service: Deactivated successfully.
Dec 12 17:36:19.591690 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 17:36:19.592460 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit.
Dec 12 17:36:19.595007 systemd-logind[1514]: Removed session 25.