Dec 16 12:32:44.799490 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:32:44.799511 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:32:44.799521 kernel: KASLR enabled
Dec 16 12:32:44.799526 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:32:44.799531 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:32:44.799537 kernel: random: crng init done
Dec 16 12:32:44.799543 kernel: secureboot: Secure boot disabled
Dec 16 12:32:44.799549 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:32:44.799555 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:32:44.799562 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:32:44.799568 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799574 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799580 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799586 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799593 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799600 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799606 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799612 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799618 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:44.799624 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:32:44.799630 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:32:44.799636 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:44.799642 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:32:44.799648 kernel: Zone ranges:
Dec 16 12:32:44.799654 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:44.799661 kernel: DMA32 empty
Dec 16 12:32:44.799667 kernel: Normal empty
Dec 16 12:32:44.799673 kernel: Device empty
Dec 16 12:32:44.799679 kernel: Movable zone start for each node
Dec 16 12:32:44.799685 kernel: Early memory node ranges
Dec 16 12:32:44.799691 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:32:44.799697 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:32:44.799703 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:32:44.799709 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:32:44.799715 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:32:44.799721 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:32:44.799727 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:32:44.799734 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:32:44.799741 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:32:44.799747 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:32:44.799756 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:32:44.799762 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:32:44.799840 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:32:44.799851 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:44.799857 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:32:44.799864 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:32:44.799870 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:32:44.799877 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:32:44.799883 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:32:44.799889 kernel: psci: Trusted OS migration not required
Dec 16 12:32:44.799895 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:32:44.799902 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:32:44.799908 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:32:44.799926 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:32:44.799932 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:32:44.799939 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:32:44.799945 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:32:44.799952 kernel: CPU features: detected: Spectre-v4
Dec 16 12:32:44.799958 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:32:44.799964 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:32:44.799971 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:32:44.799977 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:32:44.799983 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:32:44.799990 kernel: alternatives: applying boot alternatives
Dec 16 12:32:44.799997 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:32:44.800006 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:32:44.800012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:32:44.800019 kernel: Fallback order for Node 0: 0
Dec 16 12:32:44.800025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:32:44.800031 kernel: Policy zone: DMA
Dec 16 12:32:44.800037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:32:44.800044 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:32:44.800050 kernel: software IO TLB: area num 4.
Dec 16 12:32:44.800056 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:32:44.800063 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:32:44.800069 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:32:44.800077 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:32:44.800084 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:32:44.800090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:32:44.800097 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:32:44.800104 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:32:44.800111 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:32:44.800117 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:32:44.800123 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:32:44.800130 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:32:44.800136 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:32:44.800143 kernel: GICv3: 256 SPIs implemented
Dec 16 12:32:44.800150 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:32:44.800157 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:32:44.800163 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:32:44.800170 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:32:44.800176 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:32:44.800182 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:32:44.800189 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:32:44.800195 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:32:44.800202 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:32:44.800209 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:32:44.800215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:32:44.800221 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:44.800229 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:32:44.800236 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:32:44.800243 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:32:44.800249 kernel: arm-pv: using stolen time PV
Dec 16 12:32:44.800256 kernel: Console: colour dummy device 80x25
Dec 16 12:32:44.800262 kernel: ACPI: Core revision 20240827
Dec 16 12:32:44.800269 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:32:44.800276 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:32:44.800283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:32:44.800289 kernel: landlock: Up and running.
Dec 16 12:32:44.800298 kernel: SELinux: Initializing.
Dec 16 12:32:44.800304 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:32:44.800311 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:32:44.800318 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:32:44.800324 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:32:44.800331 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:32:44.800338 kernel: Remapping and enabling EFI services.
Dec 16 12:32:44.800344 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:32:44.800351 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:32:44.800364 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:32:44.800371 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:32:44.800378 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:44.800386 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:32:44.800393 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:32:44.800400 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:32:44.800407 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:32:44.800414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:44.800422 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:32:44.800429 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:32:44.800437 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:32:44.800444 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:32:44.800451 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:44.800457 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:32:44.800465 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:32:44.800471 kernel: SMP: Total of 4 processors activated.
Dec 16 12:32:44.800478 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:32:44.800486 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:32:44.800493 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:32:44.800500 kernel: CPU features: detected: Common not Private translations
Dec 16 12:32:44.800507 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:32:44.800514 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:32:44.800521 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:32:44.800527 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:32:44.800534 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:32:44.800541 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:32:44.800549 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:32:44.800556 kernel: alternatives: applying system-wide alternatives
Dec 16 12:32:44.800563 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:32:44.800570 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 16 12:32:44.800577 kernel: devtmpfs: initialized
Dec 16 12:32:44.800584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:32:44.800593 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:32:44.800599 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:32:44.800606 kernel: 0 pages in range for non-PLT usage
Dec 16 12:32:44.800615 kernel: 508400 pages in range for PLT usage
Dec 16 12:32:44.800622 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:32:44.800628 kernel: SMBIOS 3.0.0 present.
Dec 16 12:32:44.800635 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:32:44.800642 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:32:44.800649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:32:44.800656 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:32:44.800663 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:32:44.800669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:32:44.800678 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:32:44.800685 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
Dec 16 12:32:44.800692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:32:44.800699 kernel: cpuidle: using governor menu
Dec 16 12:32:44.800706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:32:44.800713 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:32:44.800720 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:32:44.800727 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:32:44.800735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:32:44.800743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:32:44.800751 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:32:44.800758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:32:44.800765 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:32:44.800785 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:32:44.800792 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:32:44.800799 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:32:44.800806 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:32:44.800813 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:32:44.800822 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:32:44.800829 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:32:44.800836 kernel: ACPI: Interpreter enabled
Dec 16 12:32:44.800843 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:32:44.800850 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:32:44.800857 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:32:44.800863 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:32:44.800870 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:32:44.800877 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:32:44.800884 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:32:44.800892 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:32:44.800899 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:32:44.801049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:32:44.801115 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:32:44.801175 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:32:44.801233 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:32:44.801290 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:32:44.801302 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:32:44.801309 kernel: PCI host bridge to bus 0000:00
Dec 16 12:32:44.801376 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:32:44.801430 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:32:44.801482 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:32:44.801533 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:32:44.801611 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:32:44.801688 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:32:44.801748 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:32:44.801879 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:32:44.801944 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:32:44.802003 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:32:44.802065 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:32:44.802129 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:32:44.802258 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:32:44.802313 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:32:44.802366 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:32:44.802376 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:32:44.802384 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:32:44.802391 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:32:44.802398 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:32:44.802407 kernel: iommu: Default domain type: Translated
Dec 16 12:32:44.802414 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:32:44.802421 kernel: efivars: Registered efivars operations
Dec 16 12:32:44.802428 kernel: vgaarb: loaded
Dec 16 12:32:44.802435 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:32:44.802442 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:32:44.802449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:32:44.802456 kernel: pnp: PnP ACPI init
Dec 16 12:32:44.802524 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:32:44.802536 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:32:44.802543 kernel: NET: Registered PF_INET protocol family
Dec 16 12:32:44.802550 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:32:44.802557 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:32:44.802564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:32:44.802571 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:32:44.802578 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:32:44.802585 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:32:44.802594 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:32:44.802601 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:32:44.802608 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:32:44.802615 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:32:44.802622 kernel: kvm [1]: HYP mode not available
Dec 16 12:32:44.802629 kernel: Initialise system trusted keyrings
Dec 16 12:32:44.802636 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:32:44.802643 kernel: Key type asymmetric registered
Dec 16 12:32:44.802650 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:32:44.802658 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:32:44.802666 kernel: io scheduler mq-deadline registered
Dec 16 12:32:44.802672 kernel: io scheduler kyber registered
Dec 16 12:32:44.802679 kernel: io scheduler bfq registered
Dec 16 12:32:44.802687 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:32:44.802694 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:32:44.802701 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:32:44.802762 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:32:44.802788 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:32:44.802799 kernel: thunder_xcv, ver 1.0
Dec 16 12:32:44.802806 kernel: thunder_bgx, ver 1.0
Dec 16 12:32:44.802813 kernel: nicpf, ver 1.0
Dec 16 12:32:44.802820 kernel: nicvf, ver 1.0
Dec 16 12:32:44.802899 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:32:44.802957 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:32:44 UTC (1765888364)
Dec 16 12:32:44.802967 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:32:44.802974 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:32:44.802983 kernel: watchdog: NMI not fully supported
Dec 16 12:32:44.802990 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:32:44.802997 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:32:44.803004 kernel: Segment Routing with IPv6
Dec 16 12:32:44.803011 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:32:44.803018 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:32:44.803025 kernel: Key type dns_resolver registered
Dec 16 12:32:44.803032 kernel: registered taskstats version 1
Dec 16 12:32:44.803038 kernel: Loading compiled-in X.509 certificates
Dec 16 12:32:44.803045 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:32:44.803054 kernel: Demotion targets for Node 0: null
Dec 16 12:32:44.803061 kernel: Key type .fscrypt registered
Dec 16 12:32:44.803068 kernel: Key type fscrypt-provisioning registered
Dec 16 12:32:44.803075 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:32:44.803082 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:32:44.803089 kernel: ima: No architecture policies found
Dec 16 12:32:44.803096 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:32:44.803103 kernel: clk: Disabling unused clocks
Dec 16 12:32:44.803109 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:32:44.803118 kernel: Warning: unable to open an initial console.
Dec 16 12:32:44.803125 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:32:44.803132 kernel: Run /init as init process
Dec 16 12:32:44.803139 kernel: with arguments:
Dec 16 12:32:44.803146 kernel: /init
Dec 16 12:32:44.803153 kernel: with environment:
Dec 16 12:32:44.803159 kernel: HOME=/
Dec 16 12:32:44.803166 kernel: TERM=linux
Dec 16 12:32:44.803174 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:32:44.803186 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:32:44.803194 systemd[1]: Detected virtualization kvm.
Dec 16 12:32:44.803201 systemd[1]: Detected architecture arm64.
Dec 16 12:32:44.803208 systemd[1]: Running in initrd.
Dec 16 12:32:44.803216 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:32:44.803223 systemd[1]: Hostname set to .
Dec 16 12:32:44.803231 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:32:44.803240 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:32:44.803247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:32:44.803255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:32:44.803263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:32:44.803270 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:32:44.803278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:32:44.803286 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:32:44.803295 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:32:44.803303 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:32:44.803311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:32:44.803318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:32:44.803326 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:32:44.803336 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:32:44.803344 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:32:44.803351 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:32:44.803360 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:32:44.803367 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:32:44.803375 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:32:44.803382 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:32:44.803389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:32:44.803397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:32:44.803404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:32:44.803412 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:32:44.803421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:32:44.803429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:32:44.803436 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:32:44.803444 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:32:44.803451 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:32:44.803459 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:32:44.803466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:32:44.803474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:32:44.803481 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:32:44.803491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:32:44.803499 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:32:44.803506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:32:44.803532 systemd-journald[244]: Collecting audit messages is disabled.
Dec 16 12:32:44.803554 systemd-journald[244]: Journal started
Dec 16 12:32:44.803572 systemd-journald[244]: Runtime Journal (/run/log/journal/4b1ad39f3c194fd6b8eb527a70969809) is 6M, max 48.5M, 42.4M free.
Dec 16 12:32:44.791704 systemd-modules-load[246]: Inserted module 'overlay'
Dec 16 12:32:44.807617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:32:44.807641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:32:44.809379 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 16 12:32:44.811179 kernel: Bridge firewalling registered
Dec 16 12:32:44.811201 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:32:44.812459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:32:44.813924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:32:44.819754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:32:44.821597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:32:44.823639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:32:44.833241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:32:44.838606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:32:44.842021 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:32:44.847820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:32:44.848172 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:32:44.850024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:32:44.851162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:32:44.871652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:32:44.884231 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:32:44.900505 systemd-resolved[290]: Positive Trust Anchors:
Dec 16 12:32:44.900525 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:32:44.900556 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:32:44.905411 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 16 12:32:44.906431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:32:44.910089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:32:44.970832 kernel: SCSI subsystem initialized
Dec 16 12:32:44.975821 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:32:44.983833 kernel: iscsi: registered transport (tcp)
Dec 16 12:32:44.997833 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:32:44.997897 kernel: QLogic iSCSI HBA Driver
Dec 16 12:32:45.019255 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:32:45.036205 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:32:45.038507 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:32:45.091863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:32:45.093857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:32:45.155826 kernel: raid6: neonx8 gen() 15760 MB/s Dec 16 12:32:45.172809 kernel: raid6: neonx4 gen() 15785 MB/s Dec 16 12:32:45.189827 kernel: raid6: neonx2 gen() 13195 MB/s Dec 16 12:32:45.206820 kernel: raid6: neonx1 gen() 10398 MB/s Dec 16 12:32:45.223817 kernel: raid6: int64x8 gen() 6886 MB/s Dec 16 12:32:45.240812 kernel: raid6: int64x4 gen() 7333 MB/s Dec 16 12:32:45.257811 kernel: raid6: int64x2 gen() 6099 MB/s Dec 16 12:32:45.274926 kernel: raid6: int64x1 gen() 5043 MB/s Dec 16 12:32:45.274975 kernel: raid6: using algorithm neonx4 gen() 15785 MB/s Dec 16 12:32:45.292813 kernel: raid6: .... xor() 12348 MB/s, rmw enabled Dec 16 12:32:45.292867 kernel: raid6: using neon recovery algorithm Dec 16 12:32:45.297806 kernel: xor: measuring software checksum speed Dec 16 12:32:45.298952 kernel: 8regs : 18565 MB/sec Dec 16 12:32:45.298971 kernel: 32regs : 21699 MB/sec Dec 16 12:32:45.300093 kernel: arm64_neon : 28041 MB/sec Dec 16 12:32:45.300105 kernel: xor: using function: arm64_neon (28041 MB/sec) Dec 16 12:32:45.352836 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:32:45.360874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:32:45.363601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:32:45.393385 systemd-udevd[499]: Using default interface naming scheme 'v255'. Dec 16 12:32:45.398629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:32:45.400743 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:32:45.432869 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Dec 16 12:32:45.458323 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:32:45.461058 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:32:45.516050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 12:32:45.519220 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:32:45.578784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:32:45.589963 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 16 12:32:45.590117 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 12:32:45.578915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:45.594146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:32:45.594171 kernel: GPT:9289727 != 19775487 Dec 16 12:32:45.594181 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:32:45.589180 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:32:45.598809 kernel: GPT:9289727 != 19775487 Dec 16 12:32:45.598831 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:32:45.598840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:32:45.592290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:32:45.622939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:32:45.624945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:45.634184 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:32:45.635789 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:32:45.655140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:32:45.661515 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:32:45.662811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Dec 16 12:32:45.665207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:32:45.668081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:32:45.670161 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:32:45.673118 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:32:45.675091 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:32:45.710322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:32:45.713650 disk-uuid[592]: Primary Header is updated. Dec 16 12:32:45.713650 disk-uuid[592]: Secondary Entries is updated. Dec 16 12:32:45.713650 disk-uuid[592]: Secondary Header is updated. Dec 16 12:32:45.719802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:32:45.723919 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:32:46.725804 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:32:46.727075 disk-uuid[600]: The operation has completed successfully. Dec 16 12:32:46.752618 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:32:46.752715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:32:46.777594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:32:46.806944 sh[613]: Success Dec 16 12:32:46.819844 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:32:46.819908 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:32:46.819933 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:32:46.827951 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:32:46.852913 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Dec 16 12:32:46.855684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:32:46.871577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:32:46.877788 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625) Dec 16 12:32:46.880369 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:32:46.880426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:32:46.884796 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:32:46.884839 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:32:46.885832 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:32:46.887549 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:32:46.888859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:32:46.889655 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:32:46.891275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 16 12:32:46.915676 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Dec 16 12:32:46.915737 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:32:46.915748 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:32:46.919881 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:32:46.919946 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:32:46.924790 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:32:46.925920 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:32:46.928192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:32:46.997659 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:32:47.000647 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 12:32:47.042815 ignition[699]: Ignition 2.22.0 Dec 16 12:32:47.042828 ignition[699]: Stage: fetch-offline Dec 16 12:32:47.042866 ignition[699]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:47.042874 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:47.042970 ignition[699]: parsed url from cmdline: "" Dec 16 12:32:47.042975 ignition[699]: no config URL provided Dec 16 12:32:47.042980 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:32:47.042987 ignition[699]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:32:47.043013 ignition[699]: op(1): [started] loading QEMU firmware config module Dec 16 12:32:47.043017 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:32:47.053624 ignition[699]: op(1): [finished] loading QEMU firmware config module Dec 16 12:32:47.056891 systemd-networkd[806]: lo: Link UP Dec 16 12:32:47.056899 systemd-networkd[806]: lo: Gained carrier Dec 16 12:32:47.057627 systemd-networkd[806]: Enumeration completed Dec 16 12:32:47.057747 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:32:47.058320 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:47.058324 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:32:47.058915 systemd-networkd[806]: eth0: Link UP Dec 16 12:32:47.059221 systemd-networkd[806]: eth0: Gained carrier Dec 16 12:32:47.059232 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:47.059348 systemd[1]: Reached target network.target - Network. 
Dec 16 12:32:47.093862 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:32:47.106543 ignition[699]: parsing config with SHA512: 23d2dec11261c1d12a060776ae1c5713b5e8933f3358c1f02f44e11174ec9cd15da7d35ce8617f128d05e3a745fd6dc3936f5895027444f079f24a44eb09e476 Dec 16 12:32:47.113200 unknown[699]: fetched base config from "system" Dec 16 12:32:47.113209 unknown[699]: fetched user config from "qemu" Dec 16 12:32:47.113647 ignition[699]: fetch-offline: fetch-offline passed Dec 16 12:32:47.115542 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:32:47.113725 ignition[699]: Ignition finished successfully Dec 16 12:32:47.117078 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 12:32:47.117945 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:32:47.154936 ignition[814]: Ignition 2.22.0 Dec 16 12:32:47.154955 ignition[814]: Stage: kargs Dec 16 12:32:47.155110 ignition[814]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:47.155119 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:47.155967 ignition[814]: kargs: kargs passed Dec 16 12:32:47.156017 ignition[814]: Ignition finished successfully Dec 16 12:32:47.158711 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:32:47.161407 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:32:47.205578 ignition[822]: Ignition 2.22.0 Dec 16 12:32:47.205594 ignition[822]: Stage: disks Dec 16 12:32:47.205727 ignition[822]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:47.205736 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:47.208985 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 16 12:32:47.206509 ignition[822]: disks: disks passed Dec 16 12:32:47.210038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:32:47.206552 ignition[822]: Ignition finished successfully Dec 16 12:32:47.211734 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:32:47.213638 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:32:47.214967 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:32:47.216680 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:32:47.219063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:32:47.242700 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 12:32:47.249855 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:32:47.253973 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:32:47.323802 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:32:47.324291 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:32:47.325610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:32:47.328175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:32:47.330011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:32:47.330991 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:32:47.331041 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:32:47.331067 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 16 12:32:47.350706 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:32:47.353573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:32:47.357985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Dec 16 12:32:47.358008 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:32:47.358018 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:32:47.361795 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:32:47.361937 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:32:47.363806 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:32:47.394578 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:32:47.399744 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:32:47.404211 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:32:47.408646 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:32:47.488835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:32:47.491340 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:32:47.494029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:32:47.512819 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:32:47.530925 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 12:32:47.547952 ignition[953]: INFO : Ignition 2.22.0 Dec 16 12:32:47.547952 ignition[953]: INFO : Stage: mount Dec 16 12:32:47.549711 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:47.549711 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:47.549711 ignition[953]: INFO : mount: mount passed Dec 16 12:32:47.549711 ignition[953]: INFO : Ignition finished successfully Dec 16 12:32:47.552122 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:32:47.554742 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:32:47.877204 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:32:47.878720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:32:47.909140 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Dec 16 12:32:47.909189 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:32:47.909200 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:32:47.912792 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:32:47.912835 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:32:47.913867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:32:47.945455 ignition[983]: INFO : Ignition 2.22.0 Dec 16 12:32:47.945455 ignition[983]: INFO : Stage: files Dec 16 12:32:47.947141 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:47.947141 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:47.947141 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:32:47.950266 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:32:47.950266 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:32:47.952852 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:32:47.954072 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:32:47.954072 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:32:47.953379 unknown[983]: wrote ssh authorized keys file for user: core Dec 16 12:32:47.957725 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:32:47.957725 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:32:47.991417 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:32:48.125441 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:32:48.125441 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:32:48.129013 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 16 12:32:48.168913 systemd-networkd[806]: eth0: Gained IPv6LL Dec 16 12:32:48.346397 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 12:32:48.406260 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 12:32:48.406260 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:32:48.410218 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:32:48.423470 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:32:48.423470 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:32:48.423470 ignition[983]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:32:48.423470 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:32:48.423470 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:32:48.423470 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:32:48.672654 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 12:32:48.861590 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:32:48.861590 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 12:32:48.865142 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:32:48.937300 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:32:48.937300 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 12:32:48.937300 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 16 12:32:48.937300 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:32:48.944086 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:32:48.944086 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 16 12:32:48.944086 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:32:48.953037 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:32:48.956200 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:32:48.957671 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:32:48.957671 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:32:48.957671 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:32:48.957671 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:32:48.957671 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:32:48.957671 ignition[983]: INFO : files: files passed Dec 16 12:32:48.957671 ignition[983]: INFO : Ignition finished successfully Dec 16 12:32:48.959782 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:32:48.966374 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:32:48.989579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:32:48.993266 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:32:48.995466 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 16 12:32:49.003027 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:32:49.006473 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:32:49.006473 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:32:49.009636 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:32:49.013806 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:32:49.015116 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:32:49.017648 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:32:49.062523 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:32:49.062626 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:32:49.066096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:32:49.067360 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:32:49.069037 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:32:49.071011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:32:49.097848 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:32:49.100437 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:32:49.124926 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:32:49.126106 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:32:49.128004 systemd[1]: Stopped target timers.target - Timer Units. 
Dec 16 12:32:49.129643 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:32:49.129787 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:32:49.132202 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:32:49.134070 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:32:49.135638 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:32:49.137283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:32:49.139099 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:32:49.140903 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:32:49.142682 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:32:49.144481 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:32:49.146348 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:32:49.148408 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:32:49.150349 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:32:49.151698 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:32:49.151851 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:32:49.154163 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:32:49.155883 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:32:49.157871 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:32:49.157981 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:32:49.160133 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:32:49.160253 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 16 12:32:49.163653 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:32:49.163800 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:32:49.165744 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:32:49.167374 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:32:49.167554 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:32:49.169532 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:32:49.171018 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:32:49.172619 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:32:49.172698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:32:49.174696 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:32:49.174790 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:32:49.176313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:32:49.176426 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:32:49.177978 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:32:49.178080 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:32:49.180331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:32:49.182280 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:32:49.183213 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:32:49.183330 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:32:49.185172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:32:49.185263 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 12:32:49.190362 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:32:49.193960 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:32:49.205946 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:32:49.220420 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:32:49.221494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:32:49.225453 ignition[1038]: INFO : Ignition 2.22.0 Dec 16 12:32:49.226459 ignition[1038]: INFO : Stage: umount Dec 16 12:32:49.227829 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:49.227829 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:49.229883 ignition[1038]: INFO : umount: umount passed Dec 16 12:32:49.229883 ignition[1038]: INFO : Ignition finished successfully Dec 16 12:32:49.231044 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:32:49.231163 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:32:49.233823 systemd[1]: Stopped target network.target - Network. Dec 16 12:32:49.234689 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:32:49.234761 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:32:49.236268 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:32:49.236315 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:32:49.237819 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:32:49.237869 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:32:49.239528 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:32:49.239570 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:32:49.241159 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 16 12:32:49.241209 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:32:49.242920 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:32:49.244597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:32:49.248298 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:32:49.248390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:32:49.251696 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:32:49.252722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:32:49.252792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:32:49.255669 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:32:49.256544 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:32:49.256642 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:32:49.259445 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:32:49.259519 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:32:49.260759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:32:49.260807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:32:49.263510 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:32:49.264480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:32:49.264534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:32:49.266826 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:32:49.266873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 12:32:49.270948 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:32:49.270992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:32:49.273944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:32:49.278107 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:32:49.292345 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:32:49.296967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:32:49.299505 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:32:49.299590 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:32:49.303123 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:32:49.303182 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:32:49.304794 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:32:49.304825 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:32:49.306647 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:32:49.306692 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:32:49.309281 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:32:49.309324 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:32:49.311866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:32:49.311926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:32:49.315259 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:32:49.316298 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 16 12:32:49.316353 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:32:49.319278 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:32:49.319321 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:32:49.322437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:32:49.322479 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:32:49.325741 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:32:49.325814 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:32:49.327961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:32:49.328005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:49.338038 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:32:49.338852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:32:49.340190 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:32:49.342495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:32:49.360450 systemd[1]: Switching root. Dec 16 12:32:49.402250 systemd-journald[244]: Journal stopped Dec 16 12:32:50.236176 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Dec 16 12:32:50.236232 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:32:50.236246 kernel: SELinux: policy capability open_perms=1 Dec 16 12:32:50.236258 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:32:50.236270 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:32:50.236282 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:32:50.236291 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:32:50.236301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:32:50.236310 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:32:50.236320 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:32:50.236331 systemd[1]: Successfully loaded SELinux policy in 65.735ms. Dec 16 12:32:50.236350 kernel: audit: type=1403 audit(1765888369.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:32:50.236360 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.436ms. Dec 16 12:32:50.236373 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:32:50.236384 systemd[1]: Detected virtualization kvm. Dec 16 12:32:50.236398 systemd[1]: Detected architecture arm64. Dec 16 12:32:50.236408 systemd[1]: Detected first boot. Dec 16 12:32:50.236418 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:32:50.236428 zram_generator::config[1087]: No configuration found. Dec 16 12:32:50.236440 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:32:50.236449 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:32:50.236461 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Dec 16 12:32:50.236471 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:32:50.236481 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:32:50.236491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:32:50.236501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:32:50.236512 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:32:50.236522 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:32:50.236532 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:32:50.236543 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:32:50.236970 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:32:50.237011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:32:50.237022 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:32:50.237160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:32:50.237182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:32:50.237193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:32:50.237204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:32:50.237215 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:32:50.237232 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:32:50.237242 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Dec 16 12:32:50.237253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:32:50.237264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:32:50.237275 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:32:50.237286 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:32:50.237331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:32:50.237343 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:32:50.237356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:32:50.237368 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:32:50.237379 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:32:50.237389 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:32:50.237400 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:32:50.237411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:32:50.237421 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:32:50.237432 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:32:50.237442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:32:50.237452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:32:50.237464 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:32:50.237475 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:32:50.237485 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:32:50.237495 systemd[1]: Mounting media.mount - External Media Directory... 
Dec 16 12:32:50.237506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:32:50.237517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:32:50.237528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:32:50.237539 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:32:50.237553 systemd[1]: Reached target machines.target - Containers. Dec 16 12:32:50.237564 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:32:50.237574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:50.237585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:32:50.237596 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:32:50.237606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:50.237631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:32:50.237645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:50.237656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:32:50.237668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:50.237679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:32:50.237690 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:32:50.237701 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:32:50.237711 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 16 12:32:50.237721 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:32:50.237732 kernel: fuse: init (API version 7.41) Dec 16 12:32:50.237742 kernel: loop: module loaded Dec 16 12:32:50.237762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:50.237787 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:32:50.237800 kernel: ACPI: bus type drm_connector registered Dec 16 12:32:50.237810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:32:50.237821 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:32:50.237831 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:32:50.237842 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:32:50.237852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:32:50.237866 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:32:50.237876 systemd[1]: Stopped verity-setup.service. Dec 16 12:32:50.237886 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:32:50.237896 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:32:50.237940 systemd-journald[1155]: Collecting audit messages is disabled. Dec 16 12:32:50.237964 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:32:50.237974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:32:50.237986 systemd-journald[1155]: Journal started Dec 16 12:32:50.238010 systemd-journald[1155]: Runtime Journal (/run/log/journal/4b1ad39f3c194fd6b8eb527a70969809) is 6M, max 48.5M, 42.4M free. 
Dec 16 12:32:50.004545 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:32:50.021912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:32:50.022333 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:32:50.241202 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:32:50.241879 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:32:50.243063 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:32:50.244377 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:32:50.245949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:32:50.247524 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:32:50.247705 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:32:50.249269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:50.249439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:50.250924 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:32:50.251089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:32:50.252625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:50.252840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:50.254565 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:32:50.254724 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:32:50.256180 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:50.256355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:50.257812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 12:32:50.259453 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:32:50.261090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:32:50.262668 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:32:50.274710 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:32:50.277306 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:32:50.279350 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:32:50.280631 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:32:50.280667 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:32:50.282551 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:32:50.288694 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:32:50.289929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:50.291062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:32:50.293473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:32:50.294947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:32:50.297340 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:32:50.298584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 16 12:32:50.300060 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:32:50.303038 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:32:50.306904 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:32:50.315563 systemd-journald[1155]: Time spent on flushing to /var/log/journal/4b1ad39f3c194fd6b8eb527a70969809 is 22.178ms for 890 entries. Dec 16 12:32:50.315563 systemd-journald[1155]: System Journal (/var/log/journal/4b1ad39f3c194fd6b8eb527a70969809) is 8M, max 195.6M, 187.6M free. Dec 16 12:32:50.352434 systemd-journald[1155]: Received client request to flush runtime journal. Dec 16 12:32:50.352498 kernel: loop0: detected capacity change from 0 to 211168 Dec 16 12:32:50.311232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:32:50.312936 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:32:50.314207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:32:50.318989 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:32:50.323327 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:32:50.327987 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:32:50.341952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:32:50.349232 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 16 12:32:50.349242 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 16 12:32:50.354386 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:32:50.356546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 16 12:32:50.363997 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:32:50.366834 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:32:50.367911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:32:50.369768 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:32:50.382821 kernel: loop1: detected capacity change from 0 to 119840 Dec 16 12:32:50.401963 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:32:50.404844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:32:50.411792 kernel: loop2: detected capacity change from 0 to 100632 Dec 16 12:32:50.431816 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 12:32:50.432142 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 12:32:50.435280 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:32:50.438805 kernel: loop3: detected capacity change from 0 to 211168 Dec 16 12:32:50.444789 kernel: loop4: detected capacity change from 0 to 119840 Dec 16 12:32:50.450793 kernel: loop5: detected capacity change from 0 to 100632 Dec 16 12:32:50.455699 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:32:50.456122 (sd-merge)[1228]: Merged extensions into '/usr'. Dec 16 12:32:50.459917 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:32:50.459935 systemd[1]: Reloading... Dec 16 12:32:50.500158 zram_generator::config[1251]: No configuration found. Dec 16 12:32:50.596697 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:32:50.664682 systemd[1]: Reloading finished in 204 ms. Dec 16 12:32:50.688480 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Dec 16 12:32:50.690076 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:32:50.701981 systemd[1]: Starting ensure-sysext.service... Dec 16 12:32:50.703677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:32:50.720833 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:32:50.720849 systemd[1]: Reloading... Dec 16 12:32:50.724513 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:32:50.724860 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:32:50.725174 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:32:50.725457 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:32:50.726195 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:32:50.726504 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 12:32:50.726677 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 12:32:50.729622 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:32:50.729714 systemd-tmpfiles[1289]: Skipping /boot Dec 16 12:32:50.735737 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:32:50.735876 systemd-tmpfiles[1289]: Skipping /boot Dec 16 12:32:50.766800 zram_generator::config[1316]: No configuration found. Dec 16 12:32:50.903549 systemd[1]: Reloading finished in 182 ms. Dec 16 12:32:50.914812 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Dec 16 12:32:50.920400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:32:50.929820 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:32:50.932272 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:32:50.934544 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:32:50.938760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:32:50.942973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:32:50.945916 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:32:50.961471 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:32:50.966401 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:32:50.968683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:50.971873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:50.974178 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:50.980893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:50.982053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:50.982255 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:50.984162 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 16 12:32:50.988501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:50.988912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:50.991347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:50.991518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:50.993003 augenrules[1383]: No rules Dec 16 12:32:50.994031 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:32:50.994258 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:32:50.999476 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Dec 16 12:32:51.006941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:32:51.009288 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:51.009452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:51.015855 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:32:51.019120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:51.021240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:51.025737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:51.028290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:51.029326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:51.029460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 12:32:51.031767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:32:51.034036 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:32:51.046020 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:32:51.048121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:51.048305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:51.049969 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:51.050162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:51.051769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:51.051968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:51.073404 systemd[1]: Finished ensure-sysext.service. Dec 16 12:32:51.079256 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:32:51.080445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:51.081938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:51.083954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:32:51.093484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:51.097045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:51.099056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:51.099104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 12:32:51.100559 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:32:51.113154 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:32:51.114276 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:32:51.114969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:51.115133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:51.118561 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:32:51.118806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:32:51.120247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:51.120514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:51.123113 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:51.123283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:51.123384 augenrules[1434]: /sbin/augenrules: No change Dec 16 12:32:51.132204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:32:51.132277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:32:51.134518 augenrules[1462]: No rules Dec 16 12:32:51.136169 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:32:51.140056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:32:51.142141 systemd-resolved[1356]: Positive Trust Anchors: Dec 16 12:32:51.142156 systemd-resolved[1356]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:32:51.142188 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:32:51.148355 systemd-resolved[1356]: Defaulting to hostname 'linux'. Dec 16 12:32:51.151596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:32:51.153597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:32:51.158063 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:32:51.204765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:32:51.207556 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:32:51.214074 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:32:51.215413 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:32:51.215634 systemd-networkd[1443]: lo: Link UP Dec 16 12:32:51.215647 systemd-networkd[1443]: lo: Gained carrier Dec 16 12:32:51.216438 systemd-networkd[1443]: Enumeration completed Dec 16 12:32:51.216874 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:51.216884 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:32:51.217400 systemd-networkd[1443]: eth0: Link UP Dec 16 12:32:51.217512 systemd-networkd[1443]: eth0: Gained carrier Dec 16 12:32:51.217531 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:51.217552 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:32:51.218735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:32:51.219867 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:32:51.221002 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:32:51.221033 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:32:51.221854 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:32:51.222898 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:32:51.223952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:32:51.225182 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:32:51.226955 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:32:51.229528 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:32:51.231923 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:32:51.232448 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:32:51.233853 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:32:51.234002 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. 
Dec 16 12:32:51.234927 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:32:51.236128 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:32:51.236224 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-12-16 12:32:51.110946 UTC. Dec 16 12:32:51.237906 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:32:51.239143 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:32:51.240893 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:32:51.244426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:32:51.245933 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:32:51.247435 systemd[1]: Reached target network.target - Network. Dec 16 12:32:51.248394 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:32:51.249428 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:32:51.250341 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:32:51.250378 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:32:51.253897 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:32:51.258032 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:32:51.261015 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:32:51.263974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:32:51.273163 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 16 12:32:51.274137 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:32:51.276052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:32:51.279407 jq[1498]: false Dec 16 12:32:51.279937 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:32:51.283944 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:32:51.288965 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:32:51.293149 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:32:51.298907 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:32:51.301470 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:32:51.303754 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:32:51.304240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:32:51.305414 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:32:51.308873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:32:51.312223 extend-filesystems[1500]: Found /dev/vda6 Dec 16 12:32:51.313019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:32:51.317571 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:32:51.320144 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 16 12:32:51.321900 extend-filesystems[1500]: Found /dev/vda9 Dec 16 12:32:51.320572 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:32:51.320759 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:32:51.323718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:32:51.325819 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:32:51.329248 jq[1520]: true Dec 16 12:32:51.332961 extend-filesystems[1500]: Checking size of /dev/vda9 Dec 16 12:32:51.343784 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:32:51.349322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:32:51.349785 update_engine[1519]: I20251216 12:32:51.349300 1519 main.cc:92] Flatcar Update Engine starting Dec 16 12:32:51.356650 jq[1528]: true Dec 16 12:32:51.357863 extend-filesystems[1500]: Resized partition /dev/vda9 Dec 16 12:32:51.361173 tar[1524]: linux-arm64/LICENSE Dec 16 12:32:51.361621 tar[1524]: linux-arm64/helm Dec 16 12:32:51.365583 extend-filesystems[1542]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:32:51.370017 dbus-daemon[1495]: [system] SELinux support is enabled Dec 16 12:32:51.370485 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:32:51.377150 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 12:32:51.379874 update_engine[1519]: I20251216 12:32:51.379822 1519 update_check_scheduler.cc:74] Next update check in 11m4s Dec 16 12:32:51.380841 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:32:51.381442 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:32:51.381488 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:32:51.383019 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:32:51.383043 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:32:51.384722 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:32:51.394587 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:32:51.437817 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:32:51.456723 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:32:51.456723 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:32:51.456723 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:32:51.469729 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Dec 16 12:32:51.471380 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:32:51.459251 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:32:51.459529 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:32:51.470929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 16 12:32:51.472950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:51.477144 locksmithd[1547]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:32:51.477763 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:32:51.511765 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:32:51.512487 systemd-logind[1509]: New seat seat0. Dec 16 12:32:51.513447 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:32:51.551908 containerd[1525]: time="2025-12-16T12:32:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:32:51.552815 containerd[1525]: time="2025-12-16T12:32:51.552728520Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:32:51.562482 containerd[1525]: time="2025-12-16T12:32:51.562440680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.6µs" Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562594920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562623680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562817440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562836920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:32:51.563512 containerd[1525]: 
time="2025-12-16T12:32:51.562861680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562913560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.562924520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.563137640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.563153560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.563164080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.563171680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563512 containerd[1525]: time="2025-12-16T12:32:51.563267760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563752 containerd[1525]: time="2025-12-16T12:32:51.563438320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563752 containerd[1525]: time="2025-12-16T12:32:51.563464960Z" level=info msg="skip loading 
plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:32:51.563752 containerd[1525]: time="2025-12-16T12:32:51.563475560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:32:51.563879 containerd[1525]: time="2025-12-16T12:32:51.563858520Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:32:51.564220 containerd[1525]: time="2025-12-16T12:32:51.564200240Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:32:51.564336 containerd[1525]: time="2025-12-16T12:32:51.564319960Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:32:51.568353 containerd[1525]: time="2025-12-16T12:32:51.568323880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:32:51.568486 containerd[1525]: time="2025-12-16T12:32:51.568467880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:32:51.568728 containerd[1525]: time="2025-12-16T12:32:51.568707720Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:32:51.568823 containerd[1525]: time="2025-12-16T12:32:51.568806320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:32:51.568884 containerd[1525]: time="2025-12-16T12:32:51.568871920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:32:51.568934 containerd[1525]: time="2025-12-16T12:32:51.568921520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:32:51.569006 containerd[1525]: 
time="2025-12-16T12:32:51.568991560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:32:51.569056 containerd[1525]: time="2025-12-16T12:32:51.569044880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:32:51.569106 containerd[1525]: time="2025-12-16T12:32:51.569094560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:32:51.569154 containerd[1525]: time="2025-12-16T12:32:51.569143160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:32:51.569199 containerd[1525]: time="2025-12-16T12:32:51.569188640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:32:51.569248 containerd[1525]: time="2025-12-16T12:32:51.569237080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:32:51.569423 containerd[1525]: time="2025-12-16T12:32:51.569402760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:32:51.569502 containerd[1525]: time="2025-12-16T12:32:51.569487360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:32:51.569553 containerd[1525]: time="2025-12-16T12:32:51.569542280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:32:51.569610 containerd[1525]: time="2025-12-16T12:32:51.569597720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:32:51.569656 containerd[1525]: time="2025-12-16T12:32:51.569645640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:32:51.569703 containerd[1525]: time="2025-12-16T12:32:51.569692120Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:32:51.569788 containerd[1525]: time="2025-12-16T12:32:51.569757000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:32:51.569863 containerd[1525]: time="2025-12-16T12:32:51.569848600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:32:51.569913 containerd[1525]: time="2025-12-16T12:32:51.569902720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:32:51.569962 containerd[1525]: time="2025-12-16T12:32:51.569950480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:32:51.570011 containerd[1525]: time="2025-12-16T12:32:51.569999440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:32:51.570259 containerd[1525]: time="2025-12-16T12:32:51.570240240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:32:51.570319 containerd[1525]: time="2025-12-16T12:32:51.570307280Z" level=info msg="Start snapshots syncer" Dec 16 12:32:51.570388 containerd[1525]: time="2025-12-16T12:32:51.570374640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:32:51.570725 containerd[1525]: time="2025-12-16T12:32:51.570685960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:32:51.570913 containerd[1525]: time="2025-12-16T12:32:51.570893400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:32:51.571057 containerd[1525]: time="2025-12-16T12:32:51.571037440Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:32:51.571223 containerd[1525]: time="2025-12-16T12:32:51.571202520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:32:51.571294 containerd[1525]: time="2025-12-16T12:32:51.571280320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:32:51.571343 containerd[1525]: time="2025-12-16T12:32:51.571332280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:32:51.571395 containerd[1525]: time="2025-12-16T12:32:51.571382720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:32:51.571446 containerd[1525]: time="2025-12-16T12:32:51.571433360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:32:51.571494 containerd[1525]: time="2025-12-16T12:32:51.571482440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:32:51.571555 containerd[1525]: time="2025-12-16T12:32:51.571542920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:32:51.571626 containerd[1525]: time="2025-12-16T12:32:51.571612840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:32:51.571686 containerd[1525]: time="2025-12-16T12:32:51.571673720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:32:51.571736 containerd[1525]: time="2025-12-16T12:32:51.571724360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:32:51.571843 containerd[1525]: time="2025-12-16T12:32:51.571827120Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:32:51.571905 containerd[1525]: time="2025-12-16T12:32:51.571890560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:32:51.571969 containerd[1525]: time="2025-12-16T12:32:51.571955480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:32:51.572016 containerd[1525]: time="2025-12-16T12:32:51.572004600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:32:51.572059 containerd[1525]: time="2025-12-16T12:32:51.572047720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:32:51.572104 containerd[1525]: time="2025-12-16T12:32:51.572092200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:32:51.572152 containerd[1525]: time="2025-12-16T12:32:51.572138840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:32:51.572264 containerd[1525]: time="2025-12-16T12:32:51.572253800Z" level=info msg="runtime interface created" Dec 16 12:32:51.572303 containerd[1525]: time="2025-12-16T12:32:51.572293640Z" level=info msg="created NRI interface" Dec 16 12:32:51.572363 containerd[1525]: time="2025-12-16T12:32:51.572351200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:32:51.572411 containerd[1525]: time="2025-12-16T12:32:51.572400520Z" level=info msg="Connect containerd service" Dec 16 12:32:51.572477 containerd[1525]: time="2025-12-16T12:32:51.572465320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:32:51.573250 
containerd[1525]: time="2025-12-16T12:32:51.573218000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:32:51.651429 containerd[1525]: time="2025-12-16T12:32:51.651377680Z" level=info msg="Start subscribing containerd event" Dec 16 12:32:51.651668 containerd[1525]: time="2025-12-16T12:32:51.651642120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:32:51.651706 containerd[1525]: time="2025-12-16T12:32:51.651695000Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:32:51.651832 containerd[1525]: time="2025-12-16T12:32:51.651800400Z" level=info msg="Start recovering state" Dec 16 12:32:51.652143 containerd[1525]: time="2025-12-16T12:32:51.652122400Z" level=info msg="Start event monitor" Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652200000Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652215240Z" level=info msg="Start streaming server" Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652224880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652232960Z" level=info msg="runtime interface starting up..." Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652238880Z" level=info msg="starting plugins..." Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652255960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:32:51.652535 containerd[1525]: time="2025-12-16T12:32:51.652390840Z" level=info msg="containerd successfully booted in 0.100865s" Dec 16 12:32:51.652496 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 12:32:51.723488 tar[1524]: linux-arm64/README.md Dec 16 12:32:51.739081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:32:52.175538 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:32:52.195024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:32:52.197637 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:32:52.223337 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:32:52.223561 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:32:52.226163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:32:52.258913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:32:52.261623 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:32:52.263816 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:32:52.265158 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:32:52.265558 systemd-networkd[1443]: eth0: Gained IPv6LL Dec 16 12:32:52.267756 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:32:52.269333 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:32:52.271617 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:32:52.274102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:32:52.286467 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:32:52.300556 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:32:52.300875 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:32:52.302868 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 16 12:32:52.306363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:32:52.885991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:32:52.887758 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:32:52.891631 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:32:52.892857 systemd[1]: Startup finished in 2.114s (kernel) + 5.001s (initrd) + 3.335s (userspace) = 10.451s. Dec 16 12:32:53.283841 kubelet[1639]: E1216 12:32:53.283708 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:32:53.286315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:32:53.286451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:32:53.287876 systemd[1]: kubelet.service: Consumed 757ms CPU time, 257.6M memory peak. Dec 16 12:32:57.761212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:32:57.763084 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224). Dec 16 12:32:57.849286 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:57.851540 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:57.858246 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:32:57.859204 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 16 12:32:57.866110 systemd-logind[1509]: New session 1 of user core.
Dec 16 12:32:57.886941 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:32:57.893231 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:32:57.909220 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:32:57.913248 systemd-logind[1509]: New session c1 of user core.
Dec 16 12:32:58.035309 systemd[1657]: Queued start job for default target default.target.
Dec 16 12:32:58.054834 systemd[1657]: Created slice app.slice - User Application Slice.
Dec 16 12:32:58.054862 systemd[1657]: Reached target paths.target - Paths.
Dec 16 12:32:58.054901 systemd[1657]: Reached target timers.target - Timers.
Dec 16 12:32:58.056877 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:32:58.066842 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:32:58.067003 systemd[1657]: Reached target sockets.target - Sockets.
Dec 16 12:32:58.067051 systemd[1657]: Reached target basic.target - Basic System.
Dec 16 12:32:58.067083 systemd[1657]: Reached target default.target - Main User Target.
Dec 16 12:32:58.067107 systemd[1657]: Startup finished in 146ms.
Dec 16 12:32:58.067225 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:32:58.068597 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:32:58.132223 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:45240.service - OpenSSH per-connection server daemon (10.0.0.1:45240).
Dec 16 12:32:58.204247 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:58.205755 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:58.211089 systemd-logind[1509]: New session 2 of user core.
Dec 16 12:32:58.222019 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:32:58.275159 sshd[1671]: Connection closed by 10.0.0.1 port 45240
Dec 16 12:32:58.275657 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:58.292199 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:45240.service: Deactivated successfully.
Dec 16 12:32:58.295468 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:32:58.296162 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:32:58.300033 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:45250.service - OpenSSH per-connection server daemon (10.0.0.1:45250).
Dec 16 12:32:58.301619 systemd-logind[1509]: Removed session 2.
Dec 16 12:32:58.358282 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 45250 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:58.359700 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:58.365154 systemd-logind[1509]: New session 3 of user core.
Dec 16 12:32:58.379991 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:32:58.428479 sshd[1680]: Connection closed by 10.0.0.1 port 45250
Dec 16 12:32:58.428344 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:58.442909 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:45250.service: Deactivated successfully.
Dec 16 12:32:58.444442 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:32:58.446374 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:32:58.450153 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:45264.service - OpenSSH per-connection server daemon (10.0.0.1:45264).
Dec 16 12:32:58.450640 systemd-logind[1509]: Removed session 3.
Dec 16 12:32:58.520997 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:58.522271 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:58.526943 systemd-logind[1509]: New session 4 of user core.
Dec 16 12:32:58.541974 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:32:58.595465 sshd[1689]: Connection closed by 10.0.0.1 port 45264
Dec 16 12:32:58.596093 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:58.608802 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:45264.service: Deactivated successfully.
Dec 16 12:32:58.612225 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:32:58.613106 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:32:58.615509 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:45276.service - OpenSSH per-connection server daemon (10.0.0.1:45276).
Dec 16 12:32:58.616351 systemd-logind[1509]: Removed session 4.
Dec 16 12:32:58.676144 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 45276 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:58.677489 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:58.685041 systemd-logind[1509]: New session 5 of user core.
Dec 16 12:32:58.699734 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:32:58.757996 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:32:58.758352 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:32:58.775854 sudo[1700]: pam_unix(sudo:session): session closed for user root
Dec 16 12:32:58.777397 sshd[1699]: Connection closed by 10.0.0.1 port 45276
Dec 16 12:32:58.777962 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:58.788885 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:45276.service: Deactivated successfully.
Dec 16 12:32:58.790329 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:32:58.792202 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:32:58.793405 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:45280.service - OpenSSH per-connection server daemon (10.0.0.1:45280).
Dec 16 12:32:58.795061 systemd-logind[1509]: Removed session 5.
Dec 16 12:32:58.858568 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 45280 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:58.859864 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:58.864555 systemd-logind[1509]: New session 6 of user core.
Dec 16 12:32:58.869972 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:32:58.921405 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:32:58.921676 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:32:58.999680 sudo[1711]: pam_unix(sudo:session): session closed for user root
Dec 16 12:32:59.005033 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:32:59.005394 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:32:59.016113 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:32:59.057565 augenrules[1733]: No rules
Dec 16 12:32:59.059007 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:32:59.060825 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:32:59.063039 sudo[1710]: pam_unix(sudo:session): session closed for user root
Dec 16 12:32:59.064805 sshd[1709]: Connection closed by 10.0.0.1 port 45280
Dec 16 12:32:59.065023 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:59.071806 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:45280.service: Deactivated successfully.
Dec 16 12:32:59.074124 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:32:59.074832 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:32:59.077169 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:45296.service - OpenSSH per-connection server daemon (10.0.0.1:45296).
Dec 16 12:32:59.079228 systemd-logind[1509]: Removed session 6.
Dec 16 12:32:59.137149 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 45296 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:32:59.138852 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:59.143533 systemd-logind[1509]: New session 7 of user core.
Dec 16 12:32:59.149946 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:32:59.201417 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:32:59.201678 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:32:59.493787 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:32:59.515164 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:32:59.729808 dockerd[1766]: time="2025-12-16T12:32:59.729588223Z" level=info msg="Starting up"
Dec 16 12:32:59.730803 dockerd[1766]: time="2025-12-16T12:32:59.730767206Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:32:59.742462 dockerd[1766]: time="2025-12-16T12:32:59.742399742Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:32:59.782375 dockerd[1766]: time="2025-12-16T12:32:59.782265581Z" level=info msg="Loading containers: start."
Dec 16 12:32:59.793826 kernel: Initializing XFRM netlink socket
Dec 16 12:33:00.006624 systemd-networkd[1443]: docker0: Link UP
Dec 16 12:33:00.010598 dockerd[1766]: time="2025-12-16T12:33:00.010535843Z" level=info msg="Loading containers: done."
Dec 16 12:33:00.024886 dockerd[1766]: time="2025-12-16T12:33:00.024828006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:33:00.025039 dockerd[1766]: time="2025-12-16T12:33:00.024918183Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:33:00.025039 dockerd[1766]: time="2025-12-16T12:33:00.025004419Z" level=info msg="Initializing buildkit"
Dec 16 12:33:00.048657 dockerd[1766]: time="2025-12-16T12:33:00.048537303Z" level=info msg="Completed buildkit initialization"
Dec 16 12:33:00.055946 dockerd[1766]: time="2025-12-16T12:33:00.055897441Z" level=info msg="Daemon has completed initialization"
Dec 16 12:33:00.056523 dockerd[1766]: time="2025-12-16T12:33:00.055973683Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:33:00.056128 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:33:00.574487 containerd[1525]: time="2025-12-16T12:33:00.574441891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 12:33:01.245425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042455738.mount: Deactivated successfully.
Dec 16 12:33:02.436801 containerd[1525]: time="2025-12-16T12:33:02.435948680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:02.437547 containerd[1525]: time="2025-12-16T12:33:02.437508058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283"
Dec 16 12:33:02.438636 containerd[1525]: time="2025-12-16T12:33:02.438592712Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:02.441747 containerd[1525]: time="2025-12-16T12:33:02.441703144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:02.442894 containerd[1525]: time="2025-12-16T12:33:02.442852412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.868363415s"
Dec 16 12:33:02.442931 containerd[1525]: time="2025-12-16T12:33:02.442899418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 16 12:33:02.444147 containerd[1525]: time="2025-12-16T12:33:02.444102465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 12:33:03.517563 containerd[1525]: time="2025-12-16T12:33:03.517495028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:03.519027 containerd[1525]: time="2025-12-16T12:33:03.518990935Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083"
Dec 16 12:33:03.520196 containerd[1525]: time="2025-12-16T12:33:03.520148826Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:03.526322 containerd[1525]: time="2025-12-16T12:33:03.526255698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:03.527490 containerd[1525]: time="2025-12-16T12:33:03.527375496Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.08323881s"
Dec 16 12:33:03.527490 containerd[1525]: time="2025-12-16T12:33:03.527407134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 16 12:33:03.528307 containerd[1525]: time="2025-12-16T12:33:03.528277853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 12:33:03.536844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:33:03.538348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:03.703578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:03.708604 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:33:03.765533 kubelet[2053]: E1216 12:33:03.765454 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:33:03.768700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:33:03.768863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:33:03.769206 systemd[1]: kubelet.service: Consumed 157ms CPU time, 106.1M memory peak.
Dec 16 12:33:04.832733 containerd[1525]: time="2025-12-16T12:33:04.832678550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:04.833826 containerd[1525]: time="2025-12-16T12:33:04.833559204Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069"
Dec 16 12:33:04.834505 containerd[1525]: time="2025-12-16T12:33:04.834460823Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:04.837555 containerd[1525]: time="2025-12-16T12:33:04.837513182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:04.839215 containerd[1525]: time="2025-12-16T12:33:04.839174851Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.31078168s"
Dec 16 12:33:04.839259 containerd[1525]: time="2025-12-16T12:33:04.839221602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 16 12:33:04.839709 containerd[1525]: time="2025-12-16T12:33:04.839671415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 12:33:05.678381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386394876.mount: Deactivated successfully.
Dec 16 12:33:05.911662 containerd[1525]: time="2025-12-16T12:33:05.911613227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:05.912727 containerd[1525]: time="2025-12-16T12:33:05.912506000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675"
Dec 16 12:33:05.913524 containerd[1525]: time="2025-12-16T12:33:05.913490382Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:05.915636 containerd[1525]: time="2025-12-16T12:33:05.915605930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:05.916105 containerd[1525]: time="2025-12-16T12:33:05.916073026Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.07635952s"
Dec 16 12:33:05.916105 containerd[1525]: time="2025-12-16T12:33:05.916104320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 16 12:33:05.916553 containerd[1525]: time="2025-12-16T12:33:05.916508669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 12:33:06.492488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177684228.mount: Deactivated successfully.
Dec 16 12:33:07.511800 containerd[1525]: time="2025-12-16T12:33:07.511563316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:07.512151 containerd[1525]: time="2025-12-16T12:33:07.511981071Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Dec 16 12:33:07.513085 containerd[1525]: time="2025-12-16T12:33:07.513043786Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:07.516620 containerd[1525]: time="2025-12-16T12:33:07.516586394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:07.518254 containerd[1525]: time="2025-12-16T12:33:07.518221284Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.601683032s"
Dec 16 12:33:07.518306 containerd[1525]: time="2025-12-16T12:33:07.518265512Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 16 12:33:07.518663 containerd[1525]: time="2025-12-16T12:33:07.518636447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:33:07.947149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681120293.mount: Deactivated successfully.
Dec 16 12:33:07.956505 containerd[1525]: time="2025-12-16T12:33:07.956101787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:33:07.956947 containerd[1525]: time="2025-12-16T12:33:07.956914526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 16 12:33:07.958062 containerd[1525]: time="2025-12-16T12:33:07.958013412Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:33:07.961309 containerd[1525]: time="2025-12-16T12:33:07.961254837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:33:07.962417 containerd[1525]: time="2025-12-16T12:33:07.961988731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 443.318784ms"
Dec 16 12:33:07.962417 containerd[1525]: time="2025-12-16T12:33:07.962036549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:33:07.962578 containerd[1525]: time="2025-12-16T12:33:07.962509500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:33:08.480257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803632744.mount: Deactivated successfully.
Dec 16 12:33:10.099042 containerd[1525]: time="2025-12-16T12:33:10.098983366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:10.099524 containerd[1525]: time="2025-12-16T12:33:10.099488806Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653"
Dec 16 12:33:10.100608 containerd[1525]: time="2025-12-16T12:33:10.100559619Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:10.104799 containerd[1525]: time="2025-12-16T12:33:10.104500352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:10.105517 containerd[1525]: time="2025-12-16T12:33:10.105480428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.142922826s"
Dec 16 12:33:10.105592 containerd[1525]: time="2025-12-16T12:33:10.105518774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 16 12:33:14.019233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:33:14.020662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:14.188601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:14.204111 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:33:14.244943 kubelet[2215]: E1216 12:33:14.244887 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:33:14.247670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:33:14.247838 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:33:14.248173 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.8M memory peak.
Dec 16 12:33:14.303025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:14.303158 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.8M memory peak.
Dec 16 12:33:14.305045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:14.325041 systemd[1]: Reload requested from client PID 2228 ('systemctl') (unit session-7.scope)...
Dec 16 12:33:14.325057 systemd[1]: Reloading...
Dec 16 12:33:14.390953 zram_generator::config[2273]: No configuration found.
Dec 16 12:33:14.671450 systemd[1]: Reloading finished in 346 ms.
Dec 16 12:33:14.737056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:14.738278 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 12:33:14.738506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:14.738554 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95.2M memory peak.
Dec 16 12:33:14.740926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:14.863379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:14.871103 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:33:14.901699 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:33:14.901699 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:33:14.901699 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:33:14.902056 kubelet[2319]: I1216 12:33:14.901743 2319 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:33:16.138402 kubelet[2319]: I1216 12:33:16.138351 2319 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:33:16.138402 kubelet[2319]: I1216 12:33:16.138384 2319 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:33:16.138754 kubelet[2319]: I1216 12:33:16.138594 2319 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:33:16.161909 kubelet[2319]: E1216 12:33:16.161836 2319 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:33:16.162391 kubelet[2319]: I1216 12:33:16.162347 2319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:33:16.169161 kubelet[2319]: I1216 12:33:16.169137 2319 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:33:16.172328 kubelet[2319]: I1216 12:33:16.172291 2319 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:33:16.173523 kubelet[2319]: I1216 12:33:16.173449 2319 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:33:16.173751 kubelet[2319]: I1216 12:33:16.173493 2319 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:33:16.173875 kubelet[2319]: I1216 12:33:16.173822 2319 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:33:16.173875 kubelet[2319]: I1216 12:33:16.173832 2319 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:33:16.175021 kubelet[2319]: I1216 12:33:16.174560 2319 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:33:16.177106 kubelet[2319]: I1216 12:33:16.177072 2319 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:33:16.177106 kubelet[2319]: I1216 12:33:16.177094 2319 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:33:16.177200 kubelet[2319]: I1216 12:33:16.177126 2319 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:33:16.178169 kubelet[2319]: I1216 12:33:16.178145 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:33:16.180176 kubelet[2319]: I1216 12:33:16.179329 2319 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:33:16.180176 kubelet[2319]: I1216 12:33:16.180070 2319 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:33:16.180915 kubelet[2319]: W1216 12:33:16.180886 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:33:16.181501 kubelet[2319]: E1216 12:33:16.181462 2319 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:33:16.181741 kubelet[2319]: E1216 12:33:16.181720 2319 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:33:16.184120 kubelet[2319]: I1216 12:33:16.184075 2319 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:33:16.184245 kubelet[2319]: I1216 12:33:16.184134 2319 server.go:1289] "Started kubelet" Dec 16 12:33:16.184692 kubelet[2319]: I1216 12:33:16.184634 2319 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:33:16.190789 kubelet[2319]: I1216 12:33:16.190695 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:33:16.191717 kubelet[2319]: I1216 12:33:16.191129 2319 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:33:16.191717 kubelet[2319]: I1216 12:33:16.191573 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:33:16.191836 kubelet[2319]: E1216 12:33:16.190835 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b223e789b093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:33:16.184105107 +0000 UTC m=+1.309834766,LastTimestamp:2025-12-16 12:33:16.184105107 +0000 UTC m=+1.309834766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:33:16.191939 kubelet[2319]: I1216 12:33:16.191918 2319 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:33:16.191971 kubelet[2319]: I1216 12:33:16.191938 2319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:33:16.192444 kubelet[2319]: I1216 12:33:16.192410 2319 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:33:16.192552 kubelet[2319]: I1216 12:33:16.192533 2319 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:33:16.192616 kubelet[2319]: I1216 12:33:16.192602 2319 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:33:16.193031 kubelet[2319]: E1216 12:33:16.192995 2319 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:33:16.193500 kubelet[2319]: E1216 12:33:16.193463 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:16.193586 kubelet[2319]: E1216 12:33:16.193561 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" Dec 16 12:33:16.194011 kubelet[2319]: I1216 12:33:16.193978 2319 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:33:16.194343 kubelet[2319]: I1216 12:33:16.194318 2319 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:33:16.194690 kubelet[2319]: E1216 12:33:16.194657 2319 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:33:16.195305 kubelet[2319]: I1216 12:33:16.195285 2319 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:33:16.205167 kubelet[2319]: I1216 12:33:16.205145 2319 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:33:16.205167 kubelet[2319]: I1216 12:33:16.205160 2319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:33:16.205301 kubelet[2319]: I1216 12:33:16.205180 2319 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:16.210656 kubelet[2319]: I1216 12:33:16.210499 2319 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:33:16.211837 kubelet[2319]: I1216 12:33:16.211545 2319 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:33:16.211837 kubelet[2319]: I1216 12:33:16.211566 2319 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:33:16.211837 kubelet[2319]: I1216 12:33:16.211586 2319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:33:16.211837 kubelet[2319]: I1216 12:33:16.211596 2319 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:33:16.211837 kubelet[2319]: E1216 12:33:16.211633 2319 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:33:16.215529 kubelet[2319]: E1216 12:33:16.215494 2319 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:33:16.293966 kubelet[2319]: E1216 12:33:16.293918 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:16.312161 kubelet[2319]: E1216 12:33:16.312125 2319 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:33:16.331606 kubelet[2319]: I1216 12:33:16.331578 2319 policy_none.go:49] "None policy: Start" Dec 16 12:33:16.331606 kubelet[2319]: I1216 12:33:16.331610 2319 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:33:16.331712 kubelet[2319]: I1216 12:33:16.331624 2319 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:33:16.338486 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:33:16.354176 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:33:16.357813 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:33:16.377838 kubelet[2319]: E1216 12:33:16.377796 2319 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:33:16.378040 kubelet[2319]: I1216 12:33:16.378021 2319 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:33:16.378098 kubelet[2319]: I1216 12:33:16.378037 2319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:33:16.378589 kubelet[2319]: I1216 12:33:16.378566 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:33:16.379429 kubelet[2319]: E1216 12:33:16.379403 2319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:33:16.379493 kubelet[2319]: E1216 12:33:16.379479 2319 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:33:16.394146 kubelet[2319]: E1216 12:33:16.394023 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Dec 16 12:33:16.479477 kubelet[2319]: I1216 12:33:16.479420 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:16.480794 kubelet[2319]: E1216 12:33:16.479917 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 16 12:33:16.530497 systemd[1]: Created slice kubepods-burstable-pod9e7d639146e9a4af02157bbb01a13219.slice - libcontainer container kubepods-burstable-pod9e7d639146e9a4af02157bbb01a13219.slice. 
Dec 16 12:33:16.563497 kubelet[2319]: E1216 12:33:16.563456 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:16.567170 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 16 12:33:16.571148 kubelet[2319]: E1216 12:33:16.570932 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:16.594717 kubelet[2319]: I1216 12:33:16.594686 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:16.594864 kubelet[2319]: I1216 12:33:16.594847 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:16.594957 kubelet[2319]: I1216 12:33:16.594945 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:16.595011 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container 
kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 16 12:33:16.595111 kubelet[2319]: I1216 12:33:16.595006 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:16.595674 kubelet[2319]: I1216 12:33:16.595157 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:16.595674 kubelet[2319]: I1216 12:33:16.595180 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:16.595674 kubelet[2319]: I1216 12:33:16.595197 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:16.595674 kubelet[2319]: I1216 12:33:16.595212 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:16.595674 kubelet[2319]: I1216 12:33:16.595228 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:16.597094 kubelet[2319]: E1216 12:33:16.597055 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:16.683140 kubelet[2319]: I1216 12:33:16.683040 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:16.683442 kubelet[2319]: E1216 12:33:16.683390 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 16 12:33:16.794515 kubelet[2319]: E1216 12:33:16.794469 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Dec 16 12:33:16.865002 containerd[1525]: time="2025-12-16T12:33:16.864940933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e7d639146e9a4af02157bbb01a13219,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:16.873245 containerd[1525]: time="2025-12-16T12:33:16.873194797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:16.897411 containerd[1525]: time="2025-12-16T12:33:16.897352435Z" level=info msg="connecting to shim 
f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e" address="unix:///run/containerd/s/38d9de0579bd6b9501cfe9f6b5f4fab6595d757b6828514e434c4160f48b2fbd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:16.899109 containerd[1525]: time="2025-12-16T12:33:16.898745916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:16.913712 containerd[1525]: time="2025-12-16T12:33:16.913670040Z" level=info msg="connecting to shim ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc" address="unix:///run/containerd/s/b29eb0bdddea4cfa56cfa601555dd85d3db51e227ede98247c698fc283bd1a5a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:16.929799 containerd[1525]: time="2025-12-16T12:33:16.929636789Z" level=info msg="connecting to shim af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35" address="unix:///run/containerd/s/a7029da0ac8701fd699ab1b7bec4ade43900f9f37614dfe04a1b7c23d0a14153" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:16.949981 systemd[1]: Started cri-containerd-ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc.scope - libcontainer container ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc. Dec 16 12:33:16.953624 systemd[1]: Started cri-containerd-f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e.scope - libcontainer container f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e. Dec 16 12:33:16.958588 systemd[1]: Started cri-containerd-af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35.scope - libcontainer container af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35. 
Dec 16 12:33:17.004602 containerd[1525]: time="2025-12-16T12:33:17.004550036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc\"" Dec 16 12:33:17.009171 containerd[1525]: time="2025-12-16T12:33:17.009116273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e7d639146e9a4af02157bbb01a13219,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e\"" Dec 16 12:33:17.010485 containerd[1525]: time="2025-12-16T12:33:17.010188480Z" level=info msg="CreateContainer within sandbox \"ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:33:17.011220 containerd[1525]: time="2025-12-16T12:33:17.011166195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35\"" Dec 16 12:33:17.015796 containerd[1525]: time="2025-12-16T12:33:17.015738861Z" level=info msg="CreateContainer within sandbox \"f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:33:17.018218 containerd[1525]: time="2025-12-16T12:33:17.017863028Z" level=info msg="CreateContainer within sandbox \"af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:33:17.021817 containerd[1525]: time="2025-12-16T12:33:17.021463611Z" level=info msg="Container 5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403: CDI devices from CRI Config.CDIDevices: []" Dec 16 
12:33:17.026821 containerd[1525]: time="2025-12-16T12:33:17.026524556Z" level=info msg="Container 7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:17.033298 kubelet[2319]: E1216 12:33:17.033251 2319 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:33:17.033516 containerd[1525]: time="2025-12-16T12:33:17.033475553Z" level=info msg="CreateContainer within sandbox \"ba9475e254e367a2de9d10cf3c508eaf6696ed0e7dc18bffa9cfd863767f36cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403\"" Dec 16 12:33:17.034386 containerd[1525]: time="2025-12-16T12:33:17.034178057Z" level=info msg="StartContainer for \"5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403\"" Dec 16 12:33:17.035515 containerd[1525]: time="2025-12-16T12:33:17.035486855Z" level=info msg="connecting to shim 5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403" address="unix:///run/containerd/s/b29eb0bdddea4cfa56cfa601555dd85d3db51e227ede98247c698fc283bd1a5a" protocol=ttrpc version=3 Dec 16 12:33:17.038698 containerd[1525]: time="2025-12-16T12:33:17.038656910Z" level=info msg="Container e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:17.040872 containerd[1525]: time="2025-12-16T12:33:17.040817619Z" level=info msg="CreateContainer within sandbox \"f5f04b7583f5b0b08aafdc2c02ec1ede6bcfae40593e18e08aca95dcd059722e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60\"" Dec 
16 12:33:17.041716 containerd[1525]: time="2025-12-16T12:33:17.041682869Z" level=info msg="StartContainer for \"7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60\"" Dec 16 12:33:17.046324 containerd[1525]: time="2025-12-16T12:33:17.046289922Z" level=info msg="connecting to shim 7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60" address="unix:///run/containerd/s/38d9de0579bd6b9501cfe9f6b5f4fab6595d757b6828514e434c4160f48b2fbd" protocol=ttrpc version=3 Dec 16 12:33:17.048796 containerd[1525]: time="2025-12-16T12:33:17.047523518Z" level=info msg="CreateContainer within sandbox \"af7a56966d539c3fa0782977ee592f7b34c2dc0d1c741cc0b27f70b732025f35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b\"" Dec 16 12:33:17.049439 containerd[1525]: time="2025-12-16T12:33:17.049403346Z" level=info msg="StartContainer for \"e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b\"" Dec 16 12:33:17.050499 containerd[1525]: time="2025-12-16T12:33:17.050469083Z" level=info msg="connecting to shim e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b" address="unix:///run/containerd/s/a7029da0ac8701fd699ab1b7bec4ade43900f9f37614dfe04a1b7c23d0a14153" protocol=ttrpc version=3 Dec 16 12:33:17.054128 systemd[1]: Started cri-containerd-5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403.scope - libcontainer container 5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403. Dec 16 12:33:17.073093 systemd[1]: Started cri-containerd-7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60.scope - libcontainer container 7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60. Dec 16 12:33:17.076164 systemd[1]: Started cri-containerd-e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b.scope - libcontainer container e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b. 
Dec 16 12:33:17.087483 kubelet[2319]: I1216 12:33:17.087453 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:17.087830 kubelet[2319]: E1216 12:33:17.087801 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Dec 16 12:33:17.118918 containerd[1525]: time="2025-12-16T12:33:17.118760031Z" level=info msg="StartContainer for \"5e8f3cb6dcb1af64780202d7343362fe43cb20313ec8b711ab7b6b3dc2f5f403\" returns successfully" Dec 16 12:33:17.122837 containerd[1525]: time="2025-12-16T12:33:17.121456424Z" level=info msg="StartContainer for \"7563dbf3fc3a78f63f42ed81b1bec4b0b6847a56be1bd0af34db196f1b1cad60\" returns successfully" Dec 16 12:33:17.129873 containerd[1525]: time="2025-12-16T12:33:17.129827126Z" level=info msg="StartContainer for \"e0b697851ded8b80a876f78630ee883d14e62b3e61ebafd2d4a99e50066a2f5b\" returns successfully" Dec 16 12:33:17.222720 kubelet[2319]: E1216 12:33:17.222618 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:17.225311 kubelet[2319]: E1216 12:33:17.225285 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:17.229788 kubelet[2319]: E1216 12:33:17.228893 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:17.892806 kubelet[2319]: I1216 12:33:17.892182 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:18.231649 kubelet[2319]: E1216 12:33:18.231529 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Dec 16 12:33:18.232732 kubelet[2319]: E1216 12:33:18.232506 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:18.233602 kubelet[2319]: E1216 12:33:18.233574 2319 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:19.053130 kubelet[2319]: E1216 12:33:19.052156 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:33:19.160461 kubelet[2319]: E1216 12:33:19.160222 2319 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1881b223e789b093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:33:16.184105107 +0000 UTC m=+1.309834766,LastTimestamp:2025-12-16 12:33:16.184105107 +0000 UTC m=+1.309834766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:33:19.212951 kubelet[2319]: I1216 12:33:19.212907 2319 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:33:19.213058 kubelet[2319]: E1216 12:33:19.212978 2319 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:33:19.229112 kubelet[2319]: E1216 12:33:19.229042 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:19.329331 kubelet[2319]: E1216 
12:33:19.329253 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:19.429707 kubelet[2319]: E1216 12:33:19.429642 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:19.530310 kubelet[2319]: E1216 12:33:19.530265 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:19.631137 kubelet[2319]: E1216 12:33:19.631017 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:19.695590 kubelet[2319]: I1216 12:33:19.695530 2319 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:19.701734 kubelet[2319]: E1216 12:33:19.701665 2319 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:19.701734 kubelet[2319]: I1216 12:33:19.701701 2319 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:19.703414 kubelet[2319]: E1216 12:33:19.703383 2319 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:19.703414 kubelet[2319]: I1216 12:33:19.703412 2319 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:19.705138 kubelet[2319]: E1216 12:33:19.705090 2319 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:20.181627 
kubelet[2319]: I1216 12:33:20.181591 2319 apiserver.go:52] "Watching apiserver" Dec 16 12:33:20.193058 kubelet[2319]: I1216 12:33:20.192868 2319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:33:21.316311 systemd[1]: Reload requested from client PID 2602 ('systemctl') (unit session-7.scope)... Dec 16 12:33:21.316326 systemd[1]: Reloading... Dec 16 12:33:21.334379 kubelet[2319]: I1216 12:33:21.334045 2319 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.386886 zram_generator::config[2645]: No configuration found. Dec 16 12:33:21.576327 systemd[1]: Reloading finished in 259 ms. Dec 16 12:33:21.603638 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:21.617113 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:33:21.617378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:21.617439 systemd[1]: kubelet.service: Consumed 1.714s CPU time, 126.8M memory peak. Dec 16 12:33:21.619422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:21.772423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:21.792146 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:33:21.837563 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:21.837563 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 12:33:21.837563 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:21.837563 kubelet[2687]: I1216 12:33:21.837521 2687 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:33:21.844250 kubelet[2687]: I1216 12:33:21.844081 2687 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:33:21.844250 kubelet[2687]: I1216 12:33:21.844113 2687 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:33:21.844403 kubelet[2687]: I1216 12:33:21.844350 2687 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:33:21.849241 kubelet[2687]: I1216 12:33:21.846236 2687 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:33:21.849241 kubelet[2687]: I1216 12:33:21.849242 2687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:33:21.854006 kubelet[2687]: I1216 12:33:21.853947 2687 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:33:21.857371 kubelet[2687]: I1216 12:33:21.857342 2687 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:33:21.857578 kubelet[2687]: I1216 12:33:21.857548 2687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:33:21.857721 kubelet[2687]: I1216 12:33:21.857574 2687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857732 2687 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:33:21.857996 
kubelet[2687]: I1216 12:33:21.857740 2687 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857801 2687 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857938 2687 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857954 2687 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857985 2687 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:33:21.857996 kubelet[2687]: I1216 12:33:21.857996 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:33:21.859656 kubelet[2687]: I1216 12:33:21.858930 2687 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:33:21.859656 kubelet[2687]: I1216 12:33:21.859624 2687 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:33:21.863969 kubelet[2687]: I1216 12:33:21.863146 2687 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:33:21.863969 kubelet[2687]: I1216 12:33:21.863204 2687 server.go:1289] "Started kubelet" Dec 16 12:33:21.864090 kubelet[2687]: I1216 12:33:21.863995 2687 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:33:21.865158 kubelet[2687]: I1216 12:33:21.864962 2687 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:33:21.866553 kubelet[2687]: I1216 12:33:21.866151 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:33:21.870645 kubelet[2687]: I1216 12:33:21.864018 2687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:33:21.870645 kubelet[2687]: I1216 12:33:21.868752 2687 server.go:255] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:33:21.876014 kubelet[2687]: I1216 12:33:21.875972 2687 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:33:21.878940 kubelet[2687]: E1216 12:33:21.878881 2687 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:21.880624 kubelet[2687]: I1216 12:33:21.878291 2687 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:33:21.882829 kubelet[2687]: I1216 12:33:21.881928 2687 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:33:21.882829 kubelet[2687]: I1216 12:33:21.882047 2687 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:33:21.883383 kubelet[2687]: E1216 12:33:21.883013 2687 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:33:21.884986 kubelet[2687]: I1216 12:33:21.884932 2687 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:33:21.885306 kubelet[2687]: I1216 12:33:21.885282 2687 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:33:21.891871 kubelet[2687]: I1216 12:33:21.890582 2687 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:33:21.901592 kubelet[2687]: I1216 12:33:21.901523 2687 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:33:21.903854 kubelet[2687]: I1216 12:33:21.903816 2687 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:33:21.903854 kubelet[2687]: I1216 12:33:21.903846 2687 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:33:21.903969 kubelet[2687]: I1216 12:33:21.903866 2687 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:33:21.903969 kubelet[2687]: I1216 12:33:21.903874 2687 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:33:21.903969 kubelet[2687]: E1216 12:33:21.903913 2687 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:33:21.938197 kubelet[2687]: I1216 12:33:21.938167 2687 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:33:21.938197 kubelet[2687]: I1216 12:33:21.938189 2687 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:33:21.938197 kubelet[2687]: I1216 12:33:21.938213 2687 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:21.938366 kubelet[2687]: I1216 12:33:21.938353 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:33:21.938389 kubelet[2687]: I1216 12:33:21.938362 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:33:21.938389 kubelet[2687]: I1216 12:33:21.938379 2687 policy_none.go:49] "None policy: Start" Dec 16 12:33:21.938389 kubelet[2687]: I1216 12:33:21.938388 2687 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:33:21.938455 kubelet[2687]: I1216 12:33:21.938397 2687 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:33:21.938500 kubelet[2687]: I1216 12:33:21.938485 2687 state_mem.go:75] "Updated machine memory state" Dec 16 12:33:21.942863 kubelet[2687]: E1216 12:33:21.942832 2687 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:33:21.943029 kubelet[2687]: I1216 
12:33:21.943010 2687 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:33:21.943363 kubelet[2687]: I1216 12:33:21.943028 2687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:33:21.943363 kubelet[2687]: I1216 12:33:21.943265 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:33:21.944089 kubelet[2687]: E1216 12:33:21.943959 2687 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:33:22.005723 kubelet[2687]: I1216 12:33:22.005617 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.005723 kubelet[2687]: I1216 12:33:22.005649 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:22.005723 kubelet[2687]: I1216 12:33:22.005707 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.012792 kubelet[2687]: E1216 12:33:22.012724 2687 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.045071 kubelet[2687]: I1216 12:33:22.045028 2687 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:22.053604 kubelet[2687]: I1216 12:33:22.053571 2687 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:33:22.053768 kubelet[2687]: I1216 12:33:22.053668 2687 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:33:22.086709 kubelet[2687]: I1216 12:33:22.086653 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.086709 kubelet[2687]: I1216 12:33:22.086700 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.086709 kubelet[2687]: I1216 12:33:22.086722 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.086932 kubelet[2687]: I1216 12:33:22.086740 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.086932 kubelet[2687]: I1216 12:33:22.086756 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.086932 kubelet[2687]: I1216 12:33:22.086802 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.086932 kubelet[2687]: I1216 12:33:22.086823 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.086932 kubelet[2687]: I1216 12:33:22.086838 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:22.087046 kubelet[2687]: I1216 12:33:22.086852 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e7d639146e9a4af02157bbb01a13219-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e7d639146e9a4af02157bbb01a13219\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.360208 sudo[2725]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 12:33:22.360501 sudo[2725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 12:33:22.690840 sudo[2725]: pam_unix(sudo:session): session closed for user root Dec 16 12:33:22.859485 kubelet[2687]: I1216 12:33:22.859439 2687 apiserver.go:52] "Watching apiserver" Dec 16 12:33:22.881636 kubelet[2687]: I1216 12:33:22.881590 2687 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:33:22.926892 
kubelet[2687]: I1216 12:33:22.926866 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.927158 kubelet[2687]: I1216 12:33:22.927139 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.927900 kubelet[2687]: I1216 12:33:22.927881 2687 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:22.932830 kubelet[2687]: E1216 12:33:22.932804 2687 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:22.933196 kubelet[2687]: E1216 12:33:22.933162 2687 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:22.936762 kubelet[2687]: E1216 12:33:22.936736 2687 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:22.948830 kubelet[2687]: I1216 12:33:22.948192 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.948179165 podStartE2EDuration="948.179165ms" podCreationTimestamp="2025-12-16 12:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:22.947055314 +0000 UTC m=+1.151080360" watchObservedRunningTime="2025-12-16 12:33:22.948179165 +0000 UTC m=+1.152204171" Dec 16 12:33:22.958129 kubelet[2687]: I1216 12:33:22.958080 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.958064162 podStartE2EDuration="958.064162ms" podCreationTimestamp="2025-12-16 12:33:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:22.957813326 +0000 UTC m=+1.161838372" watchObservedRunningTime="2025-12-16 12:33:22.958064162 +0000 UTC m=+1.162089208" Dec 16 12:33:22.971782 kubelet[2687]: I1216 12:33:22.971019 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.971003551 podStartE2EDuration="1.971003551s" podCreationTimestamp="2025-12-16 12:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:22.970875935 +0000 UTC m=+1.174900981" watchObservedRunningTime="2025-12-16 12:33:22.971003551 +0000 UTC m=+1.175028597" Dec 16 12:33:24.377909 sudo[1746]: pam_unix(sudo:session): session closed for user root Dec 16 12:33:24.379473 sshd[1745]: Connection closed by 10.0.0.1 port 45296 Dec 16 12:33:24.379889 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:24.383978 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:45296.service: Deactivated successfully. Dec 16 12:33:24.385945 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:33:24.386183 systemd[1]: session-7.scope: Consumed 6.334s CPU time, 261.3M memory peak. Dec 16 12:33:24.387603 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:33:24.389119 systemd-logind[1509]: Removed session 7. 
Dec 16 12:33:27.496129 kubelet[2687]: I1216 12:33:27.495938 2687 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:33:27.496503 kubelet[2687]: I1216 12:33:27.496380 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:33:27.496529 containerd[1525]: time="2025-12-16T12:33:27.496205486Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:33:28.349214 systemd[1]: Created slice kubepods-besteffort-pod94b5a36f_957a_4ff2_b662_73340e1ca2e1.slice - libcontainer container kubepods-besteffort-pod94b5a36f_957a_4ff2_b662_73340e1ca2e1.slice. Dec 16 12:33:28.362092 systemd[1]: Created slice kubepods-burstable-podc05999da_2e29_4654_a91e_c1548e9fbeae.slice - libcontainer container kubepods-burstable-podc05999da_2e29_4654_a91e_c1548e9fbeae.slice. Dec 16 12:33:28.435114 kubelet[2687]: I1216 12:33:28.435059 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-cgroup\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435114 kubelet[2687]: I1216 12:33:28.435105 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-xtables-lock\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435286 kubelet[2687]: I1216 12:33:28.435125 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c05999da-2e29-4654-a91e-c1548e9fbeae-clustermesh-secrets\") pod \"cilium-zktcq\" (UID: 
\"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435286 kubelet[2687]: I1216 12:33:28.435152 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-net\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435286 kubelet[2687]: I1216 12:33:28.435192 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94b5a36f-957a-4ff2-b662-73340e1ca2e1-kube-proxy\") pod \"kube-proxy-4cn59\" (UID: \"94b5a36f-957a-4ff2-b662-73340e1ca2e1\") " pod="kube-system/kube-proxy-4cn59" Dec 16 12:33:28.435286 kubelet[2687]: I1216 12:33:28.435255 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-run\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435286 kubelet[2687]: I1216 12:33:28.435283 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-lib-modules\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435385 kubelet[2687]: I1216 12:33:28.435335 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-kernel\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435385 kubelet[2687]: I1216 
12:33:28.435355 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-hubble-tls\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435385 kubelet[2687]: I1216 12:33:28.435370 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94b5a36f-957a-4ff2-b662-73340e1ca2e1-lib-modules\") pod \"kube-proxy-4cn59\" (UID: \"94b5a36f-957a-4ff2-b662-73340e1ca2e1\") " pod="kube-system/kube-proxy-4cn59" Dec 16 12:33:28.435441 kubelet[2687]: I1216 12:33:28.435404 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfqc\" (UniqueName: \"kubernetes.io/projected/94b5a36f-957a-4ff2-b662-73340e1ca2e1-kube-api-access-kcfqc\") pod \"kube-proxy-4cn59\" (UID: \"94b5a36f-957a-4ff2-b662-73340e1ca2e1\") " pod="kube-system/kube-proxy-4cn59" Dec 16 12:33:28.435441 kubelet[2687]: I1216 12:33:28.435425 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cni-path\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435484 kubelet[2687]: I1216 12:33:28.435440 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-etc-cni-netd\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435507 kubelet[2687]: I1216 12:33:28.435488 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-hostproc\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435526 kubelet[2687]: I1216 12:33:28.435513 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-config-path\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435546 kubelet[2687]: I1216 12:33:28.435535 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdtl\" (UniqueName: \"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-kube-api-access-pvdtl\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.435568 kubelet[2687]: I1216 12:33:28.435557 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94b5a36f-957a-4ff2-b662-73340e1ca2e1-xtables-lock\") pod \"kube-proxy-4cn59\" (UID: \"94b5a36f-957a-4ff2-b662-73340e1ca2e1\") " pod="kube-system/kube-proxy-4cn59" Dec 16 12:33:28.435591 kubelet[2687]: I1216 12:33:28.435574 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-bpf-maps\") pod \"cilium-zktcq\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " pod="kube-system/cilium-zktcq" Dec 16 12:33:28.648006 systemd[1]: Created slice kubepods-besteffort-poda80395be_ea47_4239_b8b1_6abd5c420fae.slice - libcontainer container kubepods-besteffort-poda80395be_ea47_4239_b8b1_6abd5c420fae.slice. 
Dec 16 12:33:28.661270 containerd[1525]: time="2025-12-16T12:33:28.661230003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4cn59,Uid:94b5a36f-957a-4ff2-b662-73340e1ca2e1,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:28.666009 containerd[1525]: time="2025-12-16T12:33:28.665950505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zktcq,Uid:c05999da-2e29-4654-a91e-c1548e9fbeae,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:28.686429 containerd[1525]: time="2025-12-16T12:33:28.686379448Z" level=info msg="connecting to shim da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280" address="unix:///run/containerd/s/27eb014144c41ab1cb9409368c602b4fd648380946c683367ce2bbef7c00e136" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:28.714006 systemd[1]: Started cri-containerd-da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280.scope - libcontainer container da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280. Dec 16 12:33:28.728734 containerd[1525]: time="2025-12-16T12:33:28.728674193Z" level=info msg="connecting to shim 562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:28.737789 kubelet[2687]: I1216 12:33:28.737742 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a80395be-ea47-4239-b8b1-6abd5c420fae-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s9ht7\" (UID: \"a80395be-ea47-4239-b8b1-6abd5c420fae\") " pod="kube-system/cilium-operator-6c4d7847fc-s9ht7" Dec 16 12:33:28.738083 kubelet[2687]: I1216 12:33:28.737808 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hspgg\" (UniqueName: 
\"kubernetes.io/projected/a80395be-ea47-4239-b8b1-6abd5c420fae-kube-api-access-hspgg\") pod \"cilium-operator-6c4d7847fc-s9ht7\" (UID: \"a80395be-ea47-4239-b8b1-6abd5c420fae\") " pod="kube-system/cilium-operator-6c4d7847fc-s9ht7" Dec 16 12:33:28.744209 containerd[1525]: time="2025-12-16T12:33:28.744173754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4cn59,Uid:94b5a36f-957a-4ff2-b662-73340e1ca2e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280\"" Dec 16 12:33:28.750293 containerd[1525]: time="2025-12-16T12:33:28.750256092Z" level=info msg="CreateContainer within sandbox \"da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:33:28.758005 systemd[1]: Started cri-containerd-562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489.scope - libcontainer container 562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489. 
Dec 16 12:33:28.760213 containerd[1525]: time="2025-12-16T12:33:28.760169535Z" level=info msg="Container 7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:28.769040 containerd[1525]: time="2025-12-16T12:33:28.768975985Z" level=info msg="CreateContainer within sandbox \"da952ba343ac95c319ec3d77b8a38fa847041a975613cfc5d67e2636ba905280\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8\"" Dec 16 12:33:28.771093 containerd[1525]: time="2025-12-16T12:33:28.769648190Z" level=info msg="StartContainer for \"7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8\"" Dec 16 12:33:28.771843 containerd[1525]: time="2025-12-16T12:33:28.771810853Z" level=info msg="connecting to shim 7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8" address="unix:///run/containerd/s/27eb014144c41ab1cb9409368c602b4fd648380946c683367ce2bbef7c00e136" protocol=ttrpc version=3 Dec 16 12:33:28.798031 systemd[1]: Started cri-containerd-7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8.scope - libcontainer container 7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8. 
Dec 16 12:33:28.803795 containerd[1525]: time="2025-12-16T12:33:28.803275538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zktcq,Uid:c05999da-2e29-4654-a91e-c1548e9fbeae,Namespace:kube-system,Attempt:0,} returns sandbox id \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\"" Dec 16 12:33:28.805188 containerd[1525]: time="2025-12-16T12:33:28.805146944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 12:33:28.944720 containerd[1525]: time="2025-12-16T12:33:28.944260127Z" level=info msg="StartContainer for \"7e062277de2f2861d4d3482b4340452e2882a71df47ca638049a24bf771f46c8\" returns successfully" Dec 16 12:33:28.954609 containerd[1525]: time="2025-12-16T12:33:28.954541207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s9ht7,Uid:a80395be-ea47-4239-b8b1-6abd5c420fae,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:29.150483 containerd[1525]: time="2025-12-16T12:33:29.150439435Z" level=info msg="connecting to shim fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779" address="unix:///run/containerd/s/bf88425bed436b9c79ea4d88ade0490603cad4b5d0854253a48042ee76f1c9b8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:29.178025 systemd[1]: Started cri-containerd-fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779.scope - libcontainer container fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779. 
Dec 16 12:33:29.217428 containerd[1525]: time="2025-12-16T12:33:29.217290325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s9ht7,Uid:a80395be-ea47-4239-b8b1-6abd5c420fae,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\""
Dec 16 12:33:29.957483 kubelet[2687]: I1216 12:33:29.957417 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4cn59" podStartSLOduration=1.9574004569999999 podStartE2EDuration="1.957400457s" podCreationTimestamp="2025-12-16 12:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:29.957349094 +0000 UTC m=+8.161374140" watchObservedRunningTime="2025-12-16 12:33:29.957400457 +0000 UTC m=+8.161425503"
Dec 16 12:33:36.041189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650359589.mount: Deactivated successfully.
Dec 16 12:33:36.449117 update_engine[1519]: I20251216 12:33:36.449049 1519 update_attempter.cc:509] Updating boot flags...
Dec 16 12:33:38.390196 containerd[1525]: time="2025-12-16T12:33:38.390118378Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:38.390860 containerd[1525]: time="2025-12-16T12:33:38.390814658Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Dec 16 12:33:38.392122 containerd[1525]: time="2025-12-16T12:33:38.392062676Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:38.394249 containerd[1525]: time="2025-12-16T12:33:38.394207575Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.589001752s"
Dec 16 12:33:38.394249 containerd[1525]: time="2025-12-16T12:33:38.394247599Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 16 12:33:38.399395 containerd[1525]: time="2025-12-16T12:33:38.399355466Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 16 12:33:38.407669 containerd[1525]: time="2025-12-16T12:33:38.407616907Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 12:33:38.451380 containerd[1525]: time="2025-12-16T12:33:38.451337700Z" level=info msg="Container 9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:38.555063 containerd[1525]: time="2025-12-16T12:33:38.555003646Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\""
Dec 16 12:33:38.555780 containerd[1525]: time="2025-12-16T12:33:38.555752385Z" level=info msg="StartContainer for \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\""
Dec 16 12:33:38.558268 containerd[1525]: time="2025-12-16T12:33:38.558242145Z" level=info msg="connecting to shim 9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" protocol=ttrpc version=3
Dec 16 12:33:38.601072 systemd[1]: Started cri-containerd-9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045.scope - libcontainer container 9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045.
Dec 16 12:33:38.662904 systemd[1]: cri-containerd-9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045.scope: Deactivated successfully.
Dec 16 12:33:38.665927 containerd[1525]: time="2025-12-16T12:33:38.665851627Z" level=info msg="StartContainer for \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" returns successfully"
Dec 16 12:33:38.707794 containerd[1525]: time="2025-12-16T12:33:38.707714487Z" level=info msg="received container exit event container_id:\"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" id:\"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" pid:3133 exited_at:{seconds:1765888418 nanos:697867083}"
Dec 16 12:33:38.754687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045-rootfs.mount: Deactivated successfully.
Dec 16 12:33:38.974661 containerd[1525]: time="2025-12-16T12:33:38.974510687Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:33:38.993490 containerd[1525]: time="2025-12-16T12:33:38.993422208Z" level=info msg="Container c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:39.000724 containerd[1525]: time="2025-12-16T12:33:39.000394327Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\""
Dec 16 12:33:39.001579 containerd[1525]: time="2025-12-16T12:33:39.001411014Z" level=info msg="StartContainer for \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\""
Dec 16 12:33:39.002627 containerd[1525]: time="2025-12-16T12:33:39.002596807Z" level=info msg="connecting to shim c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" protocol=ttrpc version=3
Dec 16 12:33:39.025977 systemd[1]: Started cri-containerd-c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3.scope - libcontainer container c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3.
Dec 16 12:33:39.054037 containerd[1525]: time="2025-12-16T12:33:39.053999685Z" level=info msg="StartContainer for \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" returns successfully"
Dec 16 12:33:39.067364 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:33:39.067579 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:33:39.067818 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:33:39.070412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:33:39.070886 systemd[1]: cri-containerd-c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3.scope: Deactivated successfully.
Dec 16 12:33:39.071348 containerd[1525]: time="2025-12-16T12:33:39.071314563Z" level=info msg="received container exit event container_id:\"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" id:\"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" pid:3177 exited_at:{seconds:1765888419 nanos:70756333}"
Dec 16 12:33:39.093000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:33:39.985351 containerd[1525]: time="2025-12-16T12:33:39.985297923Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:33:40.005159 containerd[1525]: time="2025-12-16T12:33:40.005118016Z" level=info msg="Container b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:40.010606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476620297.mount: Deactivated successfully.
Dec 16 12:33:40.052912 containerd[1525]: time="2025-12-16T12:33:40.051658181Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\""
Dec 16 12:33:40.053710 containerd[1525]: time="2025-12-16T12:33:40.053667271Z" level=info msg="StartContainer for \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\""
Dec 16 12:33:40.056425 containerd[1525]: time="2025-12-16T12:33:40.056333690Z" level=info msg="connecting to shim b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" protocol=ttrpc version=3
Dec 16 12:33:40.081019 systemd[1]: Started cri-containerd-b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4.scope - libcontainer container b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4.
Dec 16 12:33:40.172830 containerd[1525]: time="2025-12-16T12:33:40.172754698Z" level=info msg="StartContainer for \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" returns successfully"
Dec 16 12:33:40.175431 systemd[1]: cri-containerd-b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4.scope: Deactivated successfully.
Dec 16 12:33:40.178517 containerd[1525]: time="2025-12-16T12:33:40.178243760Z" level=info msg="received container exit event container_id:\"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" id:\"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" pid:3228 exited_at:{seconds:1765888420 nanos:177971376}"
Dec 16 12:33:40.985678 containerd[1525]: time="2025-12-16T12:33:40.985226510Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:33:41.032686 containerd[1525]: time="2025-12-16T12:33:41.032645548Z" level=info msg="Container aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:41.033342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246117654.mount: Deactivated successfully.
Dec 16 12:33:41.044056 containerd[1525]: time="2025-12-16T12:33:41.043949286Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\""
Dec 16 12:33:41.044646 containerd[1525]: time="2025-12-16T12:33:41.044620344Z" level=info msg="StartContainer for \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\""
Dec 16 12:33:41.045939 containerd[1525]: time="2025-12-16T12:33:41.045884805Z" level=info msg="connecting to shim aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" protocol=ttrpc version=3
Dec 16 12:33:41.075020 systemd[1]: Started cri-containerd-aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c.scope - libcontainer container aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c.
Dec 16 12:33:41.106546 systemd[1]: cri-containerd-aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c.scope: Deactivated successfully.
Dec 16 12:33:41.108892 containerd[1525]: time="2025-12-16T12:33:41.108852680Z" level=info msg="received container exit event container_id:\"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" id:\"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" pid:3266 exited_at:{seconds:1765888421 nanos:107808905}"
Dec 16 12:33:41.123095 containerd[1525]: time="2025-12-16T12:33:41.123049460Z" level=info msg="StartContainer for \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" returns successfully"
Dec 16 12:33:41.451628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c-rootfs.mount: Deactivated successfully.
Dec 16 12:33:41.947689 containerd[1525]: time="2025-12-16T12:33:41.947635599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:41.948226 containerd[1525]: time="2025-12-16T12:33:41.948198732Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Dec 16 12:33:41.951148 containerd[1525]: time="2025-12-16T12:33:41.951096893Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:41.952544 containerd[1525]: time="2025-12-16T12:33:41.952403820Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.55298038s"
Dec 16 12:33:41.952544 containerd[1525]: time="2025-12-16T12:33:41.952449565Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 16 12:33:41.959890 containerd[1525]: time="2025-12-16T12:33:41.959836200Z" level=info msg="CreateContainer within sandbox \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 16 12:33:41.980992 containerd[1525]: time="2025-12-16T12:33:41.980941733Z" level=info msg="Container 665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:41.984896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704517286.mount: Deactivated successfully.
Dec 16 12:33:42.000187 containerd[1525]: time="2025-12-16T12:33:41.999680969Z" level=info msg="CreateContainer within sandbox \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\""
Dec 16 12:33:42.001941 containerd[1525]: time="2025-12-16T12:33:42.000726823Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:33:42.003401 containerd[1525]: time="2025-12-16T12:33:42.003363004Z" level=info msg="StartContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\""
Dec 16 12:33:42.004259 containerd[1525]: time="2025-12-16T12:33:42.004231814Z" level=info msg="connecting to shim 665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e" address="unix:///run/containerd/s/bf88425bed436b9c79ea4d88ade0490603cad4b5d0854253a48042ee76f1c9b8" protocol=ttrpc version=3
Dec 16 12:33:42.030012 systemd[1]: Started cri-containerd-665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e.scope - libcontainer container 665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e.
Dec 16 12:33:42.063997 containerd[1525]: time="2025-12-16T12:33:42.063956398Z" level=info msg="StartContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" returns successfully"
Dec 16 12:33:42.081813 containerd[1525]: time="2025-12-16T12:33:42.081669741Z" level=info msg="Container b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:42.090703 containerd[1525]: time="2025-12-16T12:33:42.090652553Z" level=info msg="CreateContainer within sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\""
Dec 16 12:33:42.091463 containerd[1525]: time="2025-12-16T12:33:42.091432911Z" level=info msg="StartContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\""
Dec 16 12:33:42.092965 containerd[1525]: time="2025-12-16T12:33:42.092834676Z" level=info msg="connecting to shim b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4" address="unix:///run/containerd/s/6a03f045801ea26664970a31e087bc87a613d4a1fe7edf2c75929128f927f87c" protocol=ttrpc version=3
Dec 16 12:33:42.122995 systemd[1]: Started cri-containerd-b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4.scope - libcontainer container b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4.
Dec 16 12:33:42.188029 containerd[1525]: time="2025-12-16T12:33:42.187888335Z" level=info msg="StartContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" returns successfully"
Dec 16 12:33:42.383205 kubelet[2687]: I1216 12:33:42.383140 2687 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 12:33:42.437363 systemd[1]: Created slice kubepods-burstable-podcd721383_8bfe_45af_b3d9_964b0ee6eaee.slice - libcontainer container kubepods-burstable-podcd721383_8bfe_45af_b3d9_964b0ee6eaee.slice.
Dec 16 12:33:42.446731 systemd[1]: Created slice kubepods-burstable-podbb36f268_4592_408f_b970_e7a77fce071a.slice - libcontainer container kubepods-burstable-podbb36f268_4592_408f_b970_e7a77fce071a.slice.
Dec 16 12:33:42.519982 kubelet[2687]: I1216 12:33:42.519925 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gw2\" (UniqueName: \"kubernetes.io/projected/cd721383-8bfe-45af-b3d9-964b0ee6eaee-kube-api-access-52gw2\") pod \"coredns-674b8bbfcf-927xh\" (UID: \"cd721383-8bfe-45af-b3d9-964b0ee6eaee\") " pod="kube-system/coredns-674b8bbfcf-927xh"
Dec 16 12:33:42.519982 kubelet[2687]: I1216 12:33:42.519980 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xq7\" (UniqueName: \"kubernetes.io/projected/bb36f268-4592-408f-b970-e7a77fce071a-kube-api-access-f7xq7\") pod \"coredns-674b8bbfcf-nrwl7\" (UID: \"bb36f268-4592-408f-b970-e7a77fce071a\") " pod="kube-system/coredns-674b8bbfcf-nrwl7"
Dec 16 12:33:42.520139 kubelet[2687]: I1216 12:33:42.520011 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb36f268-4592-408f-b970-e7a77fce071a-config-volume\") pod \"coredns-674b8bbfcf-nrwl7\" (UID: \"bb36f268-4592-408f-b970-e7a77fce071a\") " pod="kube-system/coredns-674b8bbfcf-nrwl7"
Dec 16 12:33:42.520139 kubelet[2687]: I1216 12:33:42.520041 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd721383-8bfe-45af-b3d9-964b0ee6eaee-config-volume\") pod \"coredns-674b8bbfcf-927xh\" (UID: \"cd721383-8bfe-45af-b3d9-964b0ee6eaee\") " pod="kube-system/coredns-674b8bbfcf-927xh"
Dec 16 12:33:42.745942 containerd[1525]: time="2025-12-16T12:33:42.745381715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-927xh,Uid:cd721383-8bfe-45af-b3d9-964b0ee6eaee,Namespace:kube-system,Attempt:0,}"
Dec 16 12:33:42.755017 containerd[1525]: time="2025-12-16T12:33:42.754966060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrwl7,Uid:bb36f268-4592-408f-b970-e7a77fce071a,Namespace:kube-system,Attempt:0,}"
Dec 16 12:33:43.045209 kubelet[2687]: I1216 12:33:43.044948 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zktcq" podStartSLOduration=5.450420947 podStartE2EDuration="15.044932705s" podCreationTimestamp="2025-12-16 12:33:28 +0000 UTC" firstStartedPulling="2025-12-16 12:33:28.804700486 +0000 UTC m=+7.008725532" lastFinishedPulling="2025-12-16 12:33:38.399212244 +0000 UTC m=+16.603237290" observedRunningTime="2025-12-16 12:33:43.028103081 +0000 UTC m=+21.232128127" watchObservedRunningTime="2025-12-16 12:33:43.044932705 +0000 UTC m=+21.248957751"
Dec 16 12:33:46.444134 systemd-networkd[1443]: cilium_host: Link UP
Dec 16 12:33:46.444247 systemd-networkd[1443]: cilium_net: Link UP
Dec 16 12:33:46.444357 systemd-networkd[1443]: cilium_net: Gained carrier
Dec 16 12:33:46.445600 systemd-networkd[1443]: cilium_host: Gained carrier
Dec 16 12:33:46.534635 systemd-networkd[1443]: cilium_vxlan: Link UP
Dec 16 12:33:46.534648 systemd-networkd[1443]: cilium_vxlan: Gained carrier
Dec 16 12:33:46.836836 kernel: NET: Registered PF_ALG protocol family
Dec 16 12:33:47.176907 systemd-networkd[1443]: cilium_host: Gained IPv6LL
Dec 16 12:33:47.306051 systemd-networkd[1443]: cilium_net: Gained IPv6LL
Dec 16 12:33:47.535355 systemd-networkd[1443]: lxc_health: Link UP
Dec 16 12:33:47.535592 systemd-networkd[1443]: lxc_health: Gained carrier
Dec 16 12:33:47.832846 kernel: eth0: renamed from tmp0a25b
Dec 16 12:33:47.835258 kernel: eth0: renamed from tmp2a82d
Dec 16 12:33:47.834741 systemd-networkd[1443]: lxc775e8648b1c3: Link UP
Dec 16 12:33:47.835560 systemd-networkd[1443]: lxc5d7bb9d40ade: Link UP
Dec 16 12:33:47.836478 systemd-networkd[1443]: lxc5d7bb9d40ade: Gained carrier
Dec 16 12:33:47.836635 systemd-networkd[1443]: lxc775e8648b1c3: Gained carrier
Dec 16 12:33:48.584964 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Dec 16 12:33:48.585262 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL
Dec 16 12:33:48.702294 kubelet[2687]: I1216 12:33:48.702220 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s9ht7" podStartSLOduration=7.970700703 podStartE2EDuration="20.702198986s" podCreationTimestamp="2025-12-16 12:33:28 +0000 UTC" firstStartedPulling="2025-12-16 12:33:29.221706432 +0000 UTC m=+7.425731438" lastFinishedPulling="2025-12-16 12:33:41.953204675 +0000 UTC m=+20.157229721" observedRunningTime="2025-12-16 12:33:43.044877001 +0000 UTC m=+21.248902047" watchObservedRunningTime="2025-12-16 12:33:48.702198986 +0000 UTC m=+26.906224032"
Dec 16 12:33:49.032946 systemd-networkd[1443]: lxc775e8648b1c3: Gained IPv6LL
Dec 16 12:33:49.162328 systemd-networkd[1443]: lxc5d7bb9d40ade: Gained IPv6LL
Dec 16 12:33:50.425094 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:42338.service - OpenSSH per-connection server daemon (10.0.0.1:42338).
Dec 16 12:33:50.483875 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 42338 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:33:50.485211 sshd-session[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:50.491120 systemd-logind[1509]: New session 8 of user core.
Dec 16 12:33:50.498977 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:33:50.635705 sshd[3871]: Connection closed by 10.0.0.1 port 42338
Dec 16 12:33:50.636057 sshd-session[3866]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:50.639764 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Dec 16 12:33:50.640464 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:42338.service: Deactivated successfully.
Dec 16 12:33:50.642489 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 12:33:50.644875 systemd-logind[1509]: Removed session 8.
Dec 16 12:33:51.805601 containerd[1525]: time="2025-12-16T12:33:51.805512689Z" level=info msg="connecting to shim 0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc" address="unix:///run/containerd/s/ea4fec24b063b5c41067939de5865e54ca7161287382da2cf7ae875765c0944a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:33:51.807449 containerd[1525]: time="2025-12-16T12:33:51.807375965Z" level=info msg="connecting to shim 2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f" address="unix:///run/containerd/s/d1c6b810c2195282f52db7615c9fcd51a55c97df69148aef224252f9bf7e0cbb" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:33:51.840022 systemd[1]: Started cri-containerd-0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc.scope - libcontainer container 0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc.
Dec 16 12:33:51.843967 systemd[1]: Started cri-containerd-2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f.scope - libcontainer container 2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f.
Dec 16 12:33:51.859550 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 12:33:51.861039 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 12:33:51.884167 containerd[1525]: time="2025-12-16T12:33:51.883617010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-927xh,Uid:cd721383-8bfe-45af-b3d9-964b0ee6eaee,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f\""
Dec 16 12:33:51.893948 containerd[1525]: time="2025-12-16T12:33:51.893890067Z" level=info msg="CreateContainer within sandbox \"2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 12:33:51.900911 containerd[1525]: time="2025-12-16T12:33:51.900862056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrwl7,Uid:bb36f268-4592-408f-b970-e7a77fce071a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc\""
Dec 16 12:33:51.905683 containerd[1525]: time="2025-12-16T12:33:51.905642626Z" level=info msg="CreateContainer within sandbox \"0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 12:33:51.911973 containerd[1525]: time="2025-12-16T12:33:51.911921256Z" level=info msg="Container e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:51.916300 containerd[1525]: time="2025-12-16T12:33:51.916259983Z" level=info msg="Container 9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:33:51.921618 containerd[1525]: time="2025-12-16T12:33:51.921500674Z" level=info msg="CreateContainer within sandbox \"2a82d678ec5c9bd06e54778313785302f0aa5c7c9a7a53bffb52eab7640adb6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2\""
Dec 16 12:33:51.922889 containerd[1525]: time="2025-12-16T12:33:51.922488182Z" level=info msg="StartContainer for \"e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2\""
Dec 16 12:33:51.923671 containerd[1525]: time="2025-12-16T12:33:51.923631144Z" level=info msg="connecting to shim e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2" address="unix:///run/containerd/s/d1c6b810c2195282f52db7615c9fcd51a55c97df69148aef224252f9bf7e0cbb" protocol=ttrpc version=3
Dec 16 12:33:51.933232 containerd[1525]: time="2025-12-16T12:33:51.933106979Z" level=info msg="CreateContainer within sandbox \"0a25b7d3e2f91352ec5ef228501d9d8da8b9a4dc45e23960ce517c1d35aabbcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4\""
Dec 16 12:33:51.936786 containerd[1525]: time="2025-12-16T12:33:51.936210880Z" level=info msg="StartContainer for \"9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4\""
Dec 16 12:33:51.937482 containerd[1525]: time="2025-12-16T12:33:51.937401153Z" level=info msg="connecting to shim 9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4" address="unix:///run/containerd/s/ea4fec24b063b5c41067939de5865e54ca7161287382da2cf7ae875765c0944a" protocol=ttrpc version=3
Dec 16 12:33:51.950005 systemd[1]: Started cri-containerd-e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2.scope - libcontainer container e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2.
Dec 16 12:33:51.968053 systemd[1]: Started cri-containerd-9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4.scope - libcontainer container 9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4.
Dec 16 12:33:51.995434 containerd[1525]: time="2025-12-16T12:33:51.995383408Z" level=info msg="StartContainer for \"e6e985fb774e5451c266cb27b6c4b495ab581f7f660b7a2e9e965bb2097e87d2\" returns successfully"
Dec 16 12:33:52.029480 containerd[1525]: time="2025-12-16T12:33:52.029381805Z" level=info msg="StartContainer for \"9a9ced34ac4d381cccedfba69a517bae22e81c7241bb41b305a8ed7e82d4f1e4\" returns successfully"
Dec 16 12:33:52.078390 kubelet[2687]: I1216 12:33:52.078326 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-927xh" podStartSLOduration=24.078307683 podStartE2EDuration="24.078307683s" podCreationTimestamp="2025-12-16 12:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:52.058943114 +0000 UTC m=+30.262968160" watchObservedRunningTime="2025-12-16 12:33:52.078307683 +0000 UTC m=+30.282332729"
Dec 16 12:33:52.078887 kubelet[2687]: I1216 12:33:52.078633 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nrwl7" podStartSLOduration=24.078626351 podStartE2EDuration="24.078626351s" podCreationTimestamp="2025-12-16 12:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:52.078159267 +0000 UTC m=+30.282184273" watchObservedRunningTime="2025-12-16 12:33:52.078626351 +0000 UTC m=+30.282651357"
Dec 16 12:33:52.785927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278543953.mount: Deactivated successfully.
Dec 16 12:33:55.651371 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:46102.service - OpenSSH per-connection server daemon (10.0.0.1:46102).
Dec 16 12:33:55.709945 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 46102 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:33:55.711343 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:55.715723 systemd-logind[1509]: New session 9 of user core.
Dec 16 12:33:55.724869 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 12:33:55.852582 sshd[4061]: Connection closed by 10.0.0.1 port 46102
Dec 16 12:33:55.852941 sshd-session[4058]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:55.856995 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:46102.service: Deactivated successfully.
Dec 16 12:33:55.858909 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 12:33:55.859679 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Dec 16 12:33:55.863072 systemd-logind[1509]: Removed session 9.
Dec 16 12:34:00.869604 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:46128.service - OpenSSH per-connection server daemon (10.0.0.1:46128).
Dec 16 12:34:00.949284 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:34:00.950955 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:00.961995 systemd-logind[1509]: New session 10 of user core.
Dec 16 12:34:00.971050 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 12:34:01.116055 sshd[4082]: Connection closed by 10.0.0.1 port 46128
Dec 16 12:34:01.116744 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:01.120313 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:46128.service: Deactivated successfully.
Dec 16 12:34:01.122390 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:34:01.124272 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:34:01.126383 systemd-logind[1509]: Removed session 10. Dec 16 12:34:06.131882 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). Dec 16 12:34:06.190113 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:06.192673 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:06.200814 systemd-logind[1509]: New session 11 of user core. Dec 16 12:34:06.208008 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:34:06.342595 sshd[4101]: Connection closed by 10.0.0.1 port 59654 Dec 16 12:34:06.344852 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:06.355639 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:59654.service: Deactivated successfully. Dec 16 12:34:06.359346 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:34:06.360742 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:34:06.363762 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:59660.service - OpenSSH per-connection server daemon (10.0.0.1:59660). Dec 16 12:34:06.365561 systemd-logind[1509]: Removed session 11. Dec 16 12:34:06.438552 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:06.440478 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:06.445753 systemd-logind[1509]: New session 12 of user core. Dec 16 12:34:06.462020 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:34:06.628746 sshd[4118]: Connection closed by 10.0.0.1 port 59660 Dec 16 12:34:06.629239 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:06.641356 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:59660.service: Deactivated successfully. Dec 16 12:34:06.644159 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:34:06.650616 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:34:06.657260 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:59674.service - OpenSSH per-connection server daemon (10.0.0.1:59674). Dec 16 12:34:06.660087 systemd-logind[1509]: Removed session 12. Dec 16 12:34:06.714116 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 59674 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:06.715463 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:06.719519 systemd-logind[1509]: New session 13 of user core. Dec 16 12:34:06.742016 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:34:06.872911 sshd[4132]: Connection closed by 10.0.0.1 port 59674 Dec 16 12:34:06.873346 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:06.882274 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:34:06.882498 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:59674.service: Deactivated successfully. Dec 16 12:34:06.884165 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:34:06.885394 systemd-logind[1509]: Removed session 13. Dec 16 12:34:11.891043 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952). 
Dec 16 12:34:11.949835 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:11.951219 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:11.958253 systemd-logind[1509]: New session 14 of user core. Dec 16 12:34:11.973077 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:34:12.103502 sshd[4149]: Connection closed by 10.0.0.1 port 40952 Dec 16 12:34:12.103985 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:12.108360 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:40952.service: Deactivated successfully. Dec 16 12:34:12.111221 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:34:12.112970 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:34:12.114791 systemd-logind[1509]: Removed session 14. Dec 16 12:34:17.131965 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). Dec 16 12:34:17.201361 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:17.202947 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:17.207205 systemd-logind[1509]: New session 15 of user core. Dec 16 12:34:17.216095 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:34:17.379186 sshd[4167]: Connection closed by 10.0.0.1 port 41040 Dec 16 12:34:17.379479 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:17.395717 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:41040.service: Deactivated successfully. Dec 16 12:34:17.401132 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:34:17.402529 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 12:34:17.411506 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:41048.service - OpenSSH per-connection server daemon (10.0.0.1:41048). Dec 16 12:34:17.412225 systemd-logind[1509]: Removed session 15. Dec 16 12:34:17.481864 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 41048 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:17.483471 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:17.490721 systemd-logind[1509]: New session 16 of user core. Dec 16 12:34:17.500053 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:34:17.721737 sshd[4183]: Connection closed by 10.0.0.1 port 41048 Dec 16 12:34:17.722196 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:17.732322 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:41048.service: Deactivated successfully. Dec 16 12:34:17.734431 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:34:17.737592 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:34:17.743163 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:41050.service - OpenSSH per-connection server daemon (10.0.0.1:41050). Dec 16 12:34:17.752732 systemd-logind[1509]: Removed session 16. Dec 16 12:34:17.820005 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 41050 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:17.821442 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:17.826205 systemd-logind[1509]: New session 17 of user core. Dec 16 12:34:17.847056 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:34:18.440431 sshd[4198]: Connection closed by 10.0.0.1 port 41050 Dec 16 12:34:18.441010 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:18.454563 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:41050.service: Deactivated successfully. Dec 16 12:34:18.459729 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:34:18.462586 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:34:18.471117 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:41056.service - OpenSSH per-connection server daemon (10.0.0.1:41056). Dec 16 12:34:18.471903 systemd-logind[1509]: Removed session 17. Dec 16 12:34:18.524326 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 41056 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:18.525823 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:18.531246 systemd-logind[1509]: New session 18 of user core. Dec 16 12:34:18.548020 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:34:18.787829 sshd[4220]: Connection closed by 10.0.0.1 port 41056 Dec 16 12:34:18.788407 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:18.802620 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:41056.service: Deactivated successfully. Dec 16 12:34:18.805447 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:34:18.806735 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:34:18.811665 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:41072.service - OpenSSH per-connection server daemon (10.0.0.1:41072). Dec 16 12:34:18.812836 systemd-logind[1509]: Removed session 18. 
Dec 16 12:34:18.869062 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 41072 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:18.871007 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:18.877796 systemd-logind[1509]: New session 19 of user core. Dec 16 12:34:18.881037 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:34:18.999368 sshd[4234]: Connection closed by 10.0.0.1 port 41072 Dec 16 12:34:18.999799 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:19.003842 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:41072.service: Deactivated successfully. Dec 16 12:34:19.005852 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:34:19.008498 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:34:19.010392 systemd-logind[1509]: Removed session 19. Dec 16 12:34:24.019457 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:36472.service - OpenSSH per-connection server daemon (10.0.0.1:36472). Dec 16 12:34:24.084375 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 36472 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:24.086617 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:24.092118 systemd-logind[1509]: New session 20 of user core. Dec 16 12:34:24.104036 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:34:24.227427 sshd[4256]: Connection closed by 10.0.0.1 port 36472 Dec 16 12:34:24.227850 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:24.231638 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:36472.service: Deactivated successfully. Dec 16 12:34:24.233510 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:34:24.236910 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. 
Dec 16 12:34:24.237895 systemd-logind[1509]: Removed session 20. Dec 16 12:34:29.240582 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:36644.service - OpenSSH per-connection server daemon (10.0.0.1:36644). Dec 16 12:34:29.306523 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:29.308354 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:29.313015 systemd-logind[1509]: New session 21 of user core. Dec 16 12:34:29.325029 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:34:29.443919 sshd[4274]: Connection closed by 10.0.0.1 port 36644 Dec 16 12:34:29.444316 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:29.455348 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:36644.service: Deactivated successfully. Dec 16 12:34:29.457686 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:34:29.458454 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:34:29.461169 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:36660.service - OpenSSH per-connection server daemon (10.0.0.1:36660). Dec 16 12:34:29.462463 systemd-logind[1509]: Removed session 21. Dec 16 12:34:29.523847 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 36660 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:29.525358 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:29.530340 systemd-logind[1509]: New session 22 of user core. Dec 16 12:34:29.544015 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:34:31.396185 containerd[1525]: time="2025-12-16T12:34:31.396048171Z" level=info msg="StopContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" with timeout 30 (s)" Dec 16 12:34:31.410131 containerd[1525]: time="2025-12-16T12:34:31.410017372Z" level=info msg="Stop container \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" with signal terminated" Dec 16 12:34:31.421384 systemd[1]: cri-containerd-665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e.scope: Deactivated successfully. Dec 16 12:34:31.425041 containerd[1525]: time="2025-12-16T12:34:31.424981711Z" level=info msg="received container exit event container_id:\"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" id:\"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" pid:3320 exited_at:{seconds:1765888471 nanos:424556041}" Dec 16 12:34:31.440955 containerd[1525]: time="2025-12-16T12:34:31.440892789Z" level=info msg="StopContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" with timeout 2 (s)" Dec 16 12:34:31.441451 containerd[1525]: time="2025-12-16T12:34:31.441423417Z" level=info msg="Stop container \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" with signal terminated" Dec 16 12:34:31.449238 systemd-networkd[1443]: lxc_health: Link DOWN Dec 16 12:34:31.449245 systemd-networkd[1443]: lxc_health: Lost carrier Dec 16 12:34:31.452333 containerd[1525]: time="2025-12-16T12:34:31.452279689Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:34:31.465100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e-rootfs.mount: Deactivated successfully. 
Dec 16 12:34:31.466608 systemd[1]: cri-containerd-b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4.scope: Deactivated successfully. Dec 16 12:34:31.466944 systemd[1]: cri-containerd-b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4.scope: Consumed 6.866s CPU time, 124.3M memory peak, 140K read from disk, 12.9M written to disk. Dec 16 12:34:31.469148 containerd[1525]: time="2025-12-16T12:34:31.469096946Z" level=info msg="received container exit event container_id:\"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" id:\"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" pid:3353 exited_at:{seconds:1765888471 nanos:468622437}" Dec 16 12:34:31.482857 containerd[1525]: time="2025-12-16T12:34:31.482817153Z" level=info msg="StopContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" returns successfully" Dec 16 12:34:31.486698 containerd[1525]: time="2025-12-16T12:34:31.486648466Z" level=info msg="StopPodSandbox for \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\"" Dec 16 12:34:31.486820 containerd[1525]: time="2025-12-16T12:34:31.486746984Z" level=info msg="Container to stop \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:31.495216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4-rootfs.mount: Deactivated successfully. Dec 16 12:34:31.498704 systemd[1]: cri-containerd-fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779.scope: Deactivated successfully. 
Dec 16 12:34:31.500741 containerd[1525]: time="2025-12-16T12:34:31.500635947Z" level=info msg="received sandbox exit event container_id:\"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" id:\"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" exit_status:137 exited_at:{seconds:1765888471 nanos:500320394}" monitor_name=podsandbox Dec 16 12:34:31.506593 containerd[1525]: time="2025-12-16T12:34:31.506552772Z" level=info msg="StopContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" returns successfully" Dec 16 12:34:31.507880 containerd[1525]: time="2025-12-16T12:34:31.507818063Z" level=info msg="StopPodSandbox for \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\"" Dec 16 12:34:31.507976 containerd[1525]: time="2025-12-16T12:34:31.507904901Z" level=info msg="Container to stop \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:31.507976 containerd[1525]: time="2025-12-16T12:34:31.507921261Z" level=info msg="Container to stop \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:31.507976 containerd[1525]: time="2025-12-16T12:34:31.507930821Z" level=info msg="Container to stop \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:31.507976 containerd[1525]: time="2025-12-16T12:34:31.507939100Z" level=info msg="Container to stop \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 12:34:31.507976 containerd[1525]: time="2025-12-16T12:34:31.507947020Z" level=info msg="Container to stop \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Dec 16 12:34:31.516993 systemd[1]: cri-containerd-562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489.scope: Deactivated successfully. Dec 16 12:34:31.517564 containerd[1525]: time="2025-12-16T12:34:31.517522122Z" level=info msg="received sandbox exit event container_id:\"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" id:\"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" exit_status:137 exited_at:{seconds:1765888471 nanos:517142531}" monitor_name=podsandbox Dec 16 12:34:31.539917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779-rootfs.mount: Deactivated successfully. Dec 16 12:34:31.545329 containerd[1525]: time="2025-12-16T12:34:31.545273330Z" level=info msg="shim disconnected" id=fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779 namespace=k8s.io Dec 16 12:34:31.550737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489-rootfs.mount: Deactivated successfully. 
Dec 16 12:34:31.565912 containerd[1525]: time="2025-12-16T12:34:31.545318208Z" level=warning msg="cleaning up after shim disconnected" id=fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779 namespace=k8s.io Dec 16 12:34:31.565912 containerd[1525]: time="2025-12-16T12:34:31.565901339Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:34:31.566231 containerd[1525]: time="2025-12-16T12:34:31.557492091Z" level=info msg="shim disconnected" id=562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489 namespace=k8s.io Dec 16 12:34:31.566404 containerd[1525]: time="2025-12-16T12:34:31.566221532Z" level=warning msg="cleaning up after shim disconnected" id=562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489 namespace=k8s.io Dec 16 12:34:31.566404 containerd[1525]: time="2025-12-16T12:34:31.566378608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 12:34:31.582414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779-shm.mount: Deactivated successfully. Dec 16 12:34:31.582519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489-shm.mount: Deactivated successfully. 
Dec 16 12:34:31.583432 containerd[1525]: time="2025-12-16T12:34:31.583091547Z" level=info msg="TearDown network for sandbox \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" successfully" Dec 16 12:34:31.583432 containerd[1525]: time="2025-12-16T12:34:31.583139426Z" level=info msg="StopPodSandbox for \"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" returns successfully" Dec 16 12:34:31.583882 containerd[1525]: time="2025-12-16T12:34:31.583842770Z" level=info msg="received sandbox container exit event sandbox_id:\"562241feba421ed2870dfffed37e737b23d58a87aa0af81e90a3e2f5e75ad489\" exit_status:137 exited_at:{seconds:1765888471 nanos:517142531}" monitor_name=criService Dec 16 12:34:31.584680 containerd[1525]: time="2025-12-16T12:34:31.583862930Z" level=info msg="received sandbox container exit event sandbox_id:\"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" exit_status:137 exited_at:{seconds:1765888471 nanos:500320394}" monitor_name=criService Dec 16 12:34:31.585289 containerd[1525]: time="2025-12-16T12:34:31.584128764Z" level=info msg="TearDown network for sandbox \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" successfully" Dec 16 12:34:31.585289 containerd[1525]: time="2025-12-16T12:34:31.585281658Z" level=info msg="StopPodSandbox for \"fd059ed062b7245aa6d249f90217936b495c7a103b3bb2347481568c03f21779\" returns successfully" Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.637939 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cni-path\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.637991 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvdtl\" (UniqueName: 
\"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-kube-api-access-pvdtl\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.638014 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-xtables-lock\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.638033 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hspgg\" (UniqueName: \"kubernetes.io/projected/a80395be-ea47-4239-b8b1-6abd5c420fae-kube-api-access-hspgg\") pod \"a80395be-ea47-4239-b8b1-6abd5c420fae\" (UID: \"a80395be-ea47-4239-b8b1-6abd5c420fae\") " Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.638055 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c05999da-2e29-4654-a91e-c1548e9fbeae-clustermesh-secrets\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.639804 kubelet[2687]: I1216 12:34:31.638072 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-bpf-maps\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638088 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-lib-modules\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638105 
2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-kernel\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638121 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a80395be-ea47-4239-b8b1-6abd5c420fae-cilium-config-path\") pod \"a80395be-ea47-4239-b8b1-6abd5c420fae\" (UID: \"a80395be-ea47-4239-b8b1-6abd5c420fae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638139 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-config-path\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638157 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-hostproc\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640399 kubelet[2687]: I1216 12:34:31.638174 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-cgroup\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.638190 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-net\") pod 
\"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.638203 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-run\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.638217 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-etc-cni-netd\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.638238 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-hubble-tls\") pod \"c05999da-2e29-4654-a91e-c1548e9fbeae\" (UID: \"c05999da-2e29-4654-a91e-c1548e9fbeae\") " Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.639807 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640524 kubelet[2687]: I1216 12:34:31.639883 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640650 kubelet[2687]: I1216 12:34:31.639906 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640650 kubelet[2687]: I1216 12:34:31.639933 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-hostproc" (OuterVolumeSpecName: "hostproc") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640650 kubelet[2687]: I1216 12:34:31.639958 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640650 kubelet[2687]: I1216 12:34:31.639974 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.640650 kubelet[2687]: I1216 12:34:31.639992 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.644323 kubelet[2687]: I1216 12:34:31.644070 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.644990 kubelet[2687]: I1216 12:34:31.644943 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:34:31.647614 kubelet[2687]: I1216 12:34:31.646102 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cni-path" (OuterVolumeSpecName: "cni-path") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.647614 kubelet[2687]: I1216 12:34:31.646937 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 12:34:31.647614 kubelet[2687]: I1216 12:34:31.647077 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a80395be-ea47-4239-b8b1-6abd5c420fae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a80395be-ea47-4239-b8b1-6abd5c420fae" (UID: "a80395be-ea47-4239-b8b1-6abd5c420fae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:34:31.649817 kubelet[2687]: I1216 12:34:31.648971 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c05999da-2e29-4654-a91e-c1548e9fbeae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:34:31.649817 kubelet[2687]: I1216 12:34:31.648972 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80395be-ea47-4239-b8b1-6abd5c420fae-kube-api-access-hspgg" (OuterVolumeSpecName: "kube-api-access-hspgg") pod "a80395be-ea47-4239-b8b1-6abd5c420fae" (UID: "a80395be-ea47-4239-b8b1-6abd5c420fae"). InnerVolumeSpecName "kube-api-access-hspgg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:31.653121 kubelet[2687]: I1216 12:34:31.652915 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:31.653121 kubelet[2687]: I1216 12:34:31.653092 2687 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-kube-api-access-pvdtl" (OuterVolumeSpecName: "kube-api-access-pvdtl") pod "c05999da-2e29-4654-a91e-c1548e9fbeae" (UID: "c05999da-2e29-4654-a91e-c1548e9fbeae"). InnerVolumeSpecName "kube-api-access-pvdtl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739107 2687 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739145 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hspgg\" (UniqueName: \"kubernetes.io/projected/a80395be-ea47-4239-b8b1-6abd5c420fae-kube-api-access-hspgg\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739162 2687 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c05999da-2e29-4654-a91e-c1548e9fbeae-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739172 2687 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-bpf-maps\") on 
node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739180 2687 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739189 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739197 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a80395be-ea47-4239-b8b1-6abd5c420fae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739186 kubelet[2687]: I1216 12:34:31.739208 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739217 2687 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739226 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739236 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 
12:34:31.739244 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739252 2687 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739260 2687 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739267 2687 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c05999da-2e29-4654-a91e-c1548e9fbeae-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.739504 kubelet[2687]: I1216 12:34:31.739275 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pvdtl\" (UniqueName: \"kubernetes.io/projected/c05999da-2e29-4654-a91e-c1548e9fbeae-kube-api-access-pvdtl\") on node \"localhost\" DevicePath \"\"" Dec 16 12:34:31.914237 systemd[1]: Removed slice kubepods-burstable-podc05999da_2e29_4654_a91e_c1548e9fbeae.slice - libcontainer container kubepods-burstable-podc05999da_2e29_4654_a91e_c1548e9fbeae.slice. Dec 16 12:34:31.914443 systemd[1]: kubepods-burstable-podc05999da_2e29_4654_a91e_c1548e9fbeae.slice: Consumed 6.963s CPU time, 124.6M memory peak, 140K read from disk, 12.9M written to disk. Dec 16 12:34:31.916188 systemd[1]: Removed slice kubepods-besteffort-poda80395be_ea47_4239_b8b1_6abd5c420fae.slice - libcontainer container kubepods-besteffort-poda80395be_ea47_4239_b8b1_6abd5c420fae.slice. 
Dec 16 12:34:31.964698 kubelet[2687]: E1216 12:34:31.964652 2687 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 12:34:32.138530 kubelet[2687]: I1216 12:34:32.138301 2687 scope.go:117] "RemoveContainer" containerID="b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4" Dec 16 12:34:32.141789 containerd[1525]: time="2025-12-16T12:34:32.141704888Z" level=info msg="RemoveContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\"" Dec 16 12:34:32.147890 containerd[1525]: time="2025-12-16T12:34:32.147847431Z" level=info msg="RemoveContainer for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" returns successfully" Dec 16 12:34:32.148649 kubelet[2687]: I1216 12:34:32.148320 2687 scope.go:117] "RemoveContainer" containerID="aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c" Dec 16 12:34:32.150617 containerd[1525]: time="2025-12-16T12:34:32.150569130Z" level=info msg="RemoveContainer for \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\"" Dec 16 12:34:32.156060 containerd[1525]: time="2025-12-16T12:34:32.156000449Z" level=info msg="RemoveContainer for \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" returns successfully" Dec 16 12:34:32.157012 kubelet[2687]: I1216 12:34:32.156974 2687 scope.go:117] "RemoveContainer" containerID="b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4" Dec 16 12:34:32.161178 containerd[1525]: time="2025-12-16T12:34:32.161142135Z" level=info msg="RemoveContainer for \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\"" Dec 16 12:34:32.183730 containerd[1525]: time="2025-12-16T12:34:32.183439878Z" level=info msg="RemoveContainer for \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" returns successfully" Dec 16 12:34:32.183951 kubelet[2687]: I1216 
12:34:32.183814 2687 scope.go:117] "RemoveContainer" containerID="c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3" Dec 16 12:34:32.185646 containerd[1525]: time="2025-12-16T12:34:32.185609910Z" level=info msg="RemoveContainer for \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\"" Dec 16 12:34:32.189088 containerd[1525]: time="2025-12-16T12:34:32.189044753Z" level=info msg="RemoveContainer for \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" returns successfully" Dec 16 12:34:32.189447 kubelet[2687]: I1216 12:34:32.189414 2687 scope.go:117] "RemoveContainer" containerID="9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045" Dec 16 12:34:32.191323 containerd[1525]: time="2025-12-16T12:34:32.191282503Z" level=info msg="RemoveContainer for \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\"" Dec 16 12:34:32.198320 containerd[1525]: time="2025-12-16T12:34:32.198196389Z" level=info msg="RemoveContainer for \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" returns successfully" Dec 16 12:34:32.198512 kubelet[2687]: I1216 12:34:32.198449 2687 scope.go:117] "RemoveContainer" containerID="b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4" Dec 16 12:34:32.208132 containerd[1525]: time="2025-12-16T12:34:32.198709298Z" level=error msg="ContainerStatus for \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\": not found" Dec 16 12:34:32.211538 kubelet[2687]: E1216 12:34:32.211456 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\": not found" containerID="b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4" Dec 16 
12:34:32.211621 kubelet[2687]: I1216 12:34:32.211550 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4"} err="failed to get container status \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0f71dede09001b715477fe0f3bd984689d511ebdf16a35ec62c271cf75569a4\": not found" Dec 16 12:34:32.211678 kubelet[2687]: I1216 12:34:32.211624 2687 scope.go:117] "RemoveContainer" containerID="aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c" Dec 16 12:34:32.212001 containerd[1525]: time="2025-12-16T12:34:32.211958763Z" level=error msg="ContainerStatus for \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\": not found" Dec 16 12:34:32.212329 kubelet[2687]: E1216 12:34:32.212300 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\": not found" containerID="aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c" Dec 16 12:34:32.212405 kubelet[2687]: I1216 12:34:32.212329 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c"} err="failed to get container status \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa1551796a9dacbb14ec4949b077a45376da35a9e5698e2fad44c10a47c57b2c\": not found" Dec 16 12:34:32.212405 kubelet[2687]: I1216 12:34:32.212354 2687 scope.go:117] "RemoveContainer" 
containerID="b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4" Dec 16 12:34:32.212597 containerd[1525]: time="2025-12-16T12:34:32.212558229Z" level=error msg="ContainerStatus for \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\": not found" Dec 16 12:34:32.212699 kubelet[2687]: E1216 12:34:32.212679 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\": not found" containerID="b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4" Dec 16 12:34:32.212735 kubelet[2687]: I1216 12:34:32.212702 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4"} err="failed to get container status \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b41cccc901a1529d5710d59d68d7bba94abf7dc3d65a13f61bad66b2850049f4\": not found" Dec 16 12:34:32.212735 kubelet[2687]: I1216 12:34:32.212715 2687 scope.go:117] "RemoveContainer" containerID="c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3" Dec 16 12:34:32.213040 containerd[1525]: time="2025-12-16T12:34:32.212973660Z" level=error msg="ContainerStatus for \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\": not found" Dec 16 12:34:32.213113 kubelet[2687]: E1216 12:34:32.213072 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\": not found" containerID="c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3" Dec 16 12:34:32.213113 kubelet[2687]: I1216 12:34:32.213092 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3"} err="failed to get container status \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4877f25bd76c9e648e4f4289369c3765ca63bac4537b478821a32686f3b45d3\": not found" Dec 16 12:34:32.213113 kubelet[2687]: I1216 12:34:32.213104 2687 scope.go:117] "RemoveContainer" containerID="9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045" Dec 16 12:34:32.213253 containerd[1525]: time="2025-12-16T12:34:32.213225214Z" level=error msg="ContainerStatus for \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\": not found" Dec 16 12:34:32.213329 kubelet[2687]: E1216 12:34:32.213311 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\": not found" containerID="9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045" Dec 16 12:34:32.213381 kubelet[2687]: I1216 12:34:32.213330 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045"} err="failed to get container status \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"9d320ad7f2efbd14dfc4a1658e925b89a7e8469128ea4b06a7df758c5e146045\": not found" Dec 16 12:34:32.213381 kubelet[2687]: I1216 12:34:32.213342 2687 scope.go:117] "RemoveContainer" containerID="665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e" Dec 16 12:34:32.215479 containerd[1525]: time="2025-12-16T12:34:32.215044694Z" level=info msg="RemoveContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\"" Dec 16 12:34:32.226927 containerd[1525]: time="2025-12-16T12:34:32.226862711Z" level=info msg="RemoveContainer for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" returns successfully" Dec 16 12:34:32.227593 kubelet[2687]: I1216 12:34:32.227453 2687 scope.go:117] "RemoveContainer" containerID="665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e" Dec 16 12:34:32.228139 containerd[1525]: time="2025-12-16T12:34:32.228064764Z" level=error msg="ContainerStatus for \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\": not found" Dec 16 12:34:32.228442 kubelet[2687]: E1216 12:34:32.228371 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\": not found" containerID="665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e" Dec 16 12:34:32.228442 kubelet[2687]: I1216 12:34:32.228404 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e"} err="failed to get container status \"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"665a8a0a9baa3d832c01670e284fff966b1da7a4fe05e1dcffec7a0e6d1c0d3e\": not found" Dec 16 12:34:32.466849 systemd[1]: var-lib-kubelet-pods-a80395be\x2dea47\x2d4239\x2db8b1\x2d6abd5c420fae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhspgg.mount: Deactivated successfully. Dec 16 12:34:32.467127 systemd[1]: var-lib-kubelet-pods-c05999da\x2d2e29\x2d4654\x2da91e\x2dc1548e9fbeae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpvdtl.mount: Deactivated successfully. Dec 16 12:34:32.467244 systemd[1]: var-lib-kubelet-pods-c05999da\x2d2e29\x2d4654\x2da91e\x2dc1548e9fbeae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 12:34:32.467390 systemd[1]: var-lib-kubelet-pods-c05999da\x2d2e29\x2d4654\x2da91e\x2dc1548e9fbeae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 12:34:33.340989 sshd[4290]: Connection closed by 10.0.0.1 port 36660 Dec 16 12:34:33.340931 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:33.350566 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:36660.service: Deactivated successfully. Dec 16 12:34:33.352444 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:34:33.352794 systemd[1]: session-22.scope: Consumed 1.145s CPU time, 23.9M memory peak. Dec 16 12:34:33.353294 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:34:33.356106 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:38022.service - OpenSSH per-connection server daemon (10.0.0.1:38022). Dec 16 12:34:33.356645 systemd-logind[1509]: Removed session 22. Dec 16 12:34:33.435688 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 38022 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:33.437369 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:33.445910 systemd-logind[1509]: New session 23 of user core. 
Dec 16 12:34:33.457015 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:34:33.698812 kubelet[2687]: I1216 12:34:33.698212 2687 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T12:34:33Z","lastTransitionTime":"2025-12-16T12:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 12:34:33.907699 kubelet[2687]: I1216 12:34:33.907659 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80395be-ea47-4239-b8b1-6abd5c420fae" path="/var/lib/kubelet/pods/a80395be-ea47-4239-b8b1-6abd5c420fae/volumes" Dec 16 12:34:33.908145 kubelet[2687]: I1216 12:34:33.908124 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c05999da-2e29-4654-a91e-c1548e9fbeae" path="/var/lib/kubelet/pods/c05999da-2e29-4654-a91e-c1548e9fbeae/volumes" Dec 16 12:34:34.386493 sshd[4440]: Connection closed by 10.0.0.1 port 38022 Dec 16 12:34:34.387792 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:34.395787 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:38022.service: Deactivated successfully. Dec 16 12:34:34.398136 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:34:34.398998 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:34:34.400376 systemd-logind[1509]: Removed session 23. Dec 16 12:34:34.403133 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:38024.service - OpenSSH per-connection server daemon (10.0.0.1:38024). Dec 16 12:34:34.446934 systemd[1]: Created slice kubepods-burstable-podf86a4a12_f96e_4b4e_b5ea_e982dae4b4ce.slice - libcontainer container kubepods-burstable-podf86a4a12_f96e_4b4e_b5ea_e982dae4b4ce.slice. 
Dec 16 12:34:34.488434 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 38024 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:34.489732 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:34.493715 systemd-logind[1509]: New session 24 of user core. Dec 16 12:34:34.502943 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554134 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-cni-path\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554171 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-etc-cni-netd\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554190 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-cilium-ipsec-secrets\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554206 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-host-proc-sys-kernel\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554222 2687 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4d4\" (UniqueName: \"kubernetes.io/projected/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-kube-api-access-gl4d4\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554348 kubelet[2687]: I1216 12:34:34.554238 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-cilium-run\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554252 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-cilium-config-path\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554267 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-hubble-tls\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554282 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-hostproc\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554307 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-cilium-cgroup\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554358 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-bpf-maps\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554574 kubelet[2687]: I1216 12:34:34.554424 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-xtables-lock\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554688 kubelet[2687]: I1216 12:34:34.554464 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-lib-modules\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554688 kubelet[2687]: I1216 12:34:34.554481 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-clustermesh-secrets\") pod \"cilium-q7lm8\" (UID: \"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.554688 kubelet[2687]: I1216 12:34:34.554499 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce-host-proc-sys-net\") pod \"cilium-q7lm8\" (UID: 
\"f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce\") " pod="kube-system/cilium-q7lm8" Dec 16 12:34:34.556365 sshd[4455]: Connection closed by 10.0.0.1 port 38024 Dec 16 12:34:34.556819 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:34.565906 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:38024.service: Deactivated successfully. Dec 16 12:34:34.567595 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:34:34.568336 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:34:34.570561 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:38038.service - OpenSSH per-connection server daemon (10.0.0.1:38038). Dec 16 12:34:34.572623 systemd-logind[1509]: Removed session 24. Dec 16 12:34:34.638636 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 38038 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:34.640057 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:34.643915 systemd-logind[1509]: New session 25 of user core. Dec 16 12:34:34.654948 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:34:34.756884 containerd[1525]: time="2025-12-16T12:34:34.756831073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7lm8,Uid:f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce,Namespace:kube-system,Attempt:0,}" Dec 16 12:34:34.788742 containerd[1525]: time="2025-12-16T12:34:34.788692235Z" level=info msg="connecting to shim aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:34.809005 systemd[1]: Started cri-containerd-aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2.scope - libcontainer container aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2. 
Dec 16 12:34:34.842238 containerd[1525]: time="2025-12-16T12:34:34.842112617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7lm8,Uid:f86a4a12-f96e-4b4e-b5ea-e982dae4b4ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\"" Dec 16 12:34:34.848153 containerd[1525]: time="2025-12-16T12:34:34.848116170Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 12:34:34.858350 containerd[1525]: time="2025-12-16T12:34:34.858297993Z" level=info msg="Container 68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:34:34.876670 containerd[1525]: time="2025-12-16T12:34:34.876259090Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db\"" Dec 16 12:34:34.877372 containerd[1525]: time="2025-12-16T12:34:34.877312148Z" level=info msg="StartContainer for \"68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db\"" Dec 16 12:34:34.878913 containerd[1525]: time="2025-12-16T12:34:34.878872955Z" level=info msg="connecting to shim 68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" protocol=ttrpc version=3 Dec 16 12:34:34.900008 systemd[1]: Started cri-containerd-68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db.scope - libcontainer container 68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db. Dec 16 12:34:34.939902 systemd[1]: cri-containerd-68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db.scope: Deactivated successfully. 
Dec 16 12:34:34.943786 containerd[1525]: time="2025-12-16T12:34:34.942270485Z" level=info msg="received container exit event container_id:\"68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db\" id:\"68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db\" pid:4535 exited_at:{seconds:1765888474 nanos:940896754}"
Dec 16 12:34:34.943786 containerd[1525]: time="2025-12-16T12:34:34.943266224Z" level=info msg="StartContainer for \"68bd6aaffdba2c46bd87358804d4ee6355ce66c0bb2af0c740a7df1da84d48db\" returns successfully"
Dec 16 12:34:35.161206 containerd[1525]: time="2025-12-16T12:34:35.161100097Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 12:34:35.171529 containerd[1525]: time="2025-12-16T12:34:35.171464201Z" level=info msg="Container 44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:35.178171 containerd[1525]: time="2025-12-16T12:34:35.178055144Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6\""
Dec 16 12:34:35.178945 containerd[1525]: time="2025-12-16T12:34:35.178916526Z" level=info msg="StartContainer for \"44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6\""
Dec 16 12:34:35.180040 containerd[1525]: time="2025-12-16T12:34:35.180000783Z" level=info msg="connecting to shim 44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" protocol=ttrpc version=3
Dec 16 12:34:35.199975 systemd[1]: Started cri-containerd-44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6.scope - libcontainer container 44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6.
Dec 16 12:34:35.235097 systemd[1]: cri-containerd-44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6.scope: Deactivated successfully.
Dec 16 12:34:35.283213 containerd[1525]: time="2025-12-16T12:34:35.283166515Z" level=info msg="received container exit event container_id:\"44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6\" id:\"44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6\" pid:4579 exited_at:{seconds:1765888475 nanos:234900440}"
Dec 16 12:34:35.284435 containerd[1525]: time="2025-12-16T12:34:35.284379930Z" level=info msg="StartContainer for \"44fc83f10a059158c59fb7d57c83b01933842c287ca8fb3d9941e7fba5e05dd6\" returns successfully"
Dec 16 12:34:36.160935 containerd[1525]: time="2025-12-16T12:34:36.160070926Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 12:34:36.178027 containerd[1525]: time="2025-12-16T12:34:36.177981761Z" level=info msg="Container 2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:36.186039 containerd[1525]: time="2025-12-16T12:34:36.185895400Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2\""
Dec 16 12:34:36.187742 containerd[1525]: time="2025-12-16T12:34:36.187685363Z" level=info msg="StartContainer for \"2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2\""
Dec 16 12:34:36.189527 containerd[1525]: time="2025-12-16T12:34:36.189494966Z" level=info msg="connecting to shim 2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" protocol=ttrpc version=3
Dec 16 12:34:36.215057 systemd[1]: Started cri-containerd-2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2.scope - libcontainer container 2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2.
Dec 16 12:34:36.291212 systemd[1]: cri-containerd-2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2.scope: Deactivated successfully.
Dec 16 12:34:36.348754 containerd[1525]: time="2025-12-16T12:34:36.348703483Z" level=info msg="received container exit event container_id:\"2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2\" id:\"2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2\" pid:4623 exited_at:{seconds:1765888476 nanos:292381231}"
Dec 16 12:34:36.354141 containerd[1525]: time="2025-12-16T12:34:36.354035175Z" level=info msg="StartContainer for \"2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2\" returns successfully"
Dec 16 12:34:36.661983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f22837cf63488654a255a019e9cb12d8430f4e44ef39f7b6b9358d829ffebb2-rootfs.mount: Deactivated successfully.
Dec 16 12:34:36.966716 kubelet[2687]: E1216 12:34:36.966590 2687 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 12:34:37.169803 containerd[1525]: time="2025-12-16T12:34:37.169518038Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 12:34:37.181799 containerd[1525]: time="2025-12-16T12:34:37.181438201Z" level=info msg="Container 351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:37.194499 containerd[1525]: time="2025-12-16T12:34:37.194439222Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa\""
Dec 16 12:34:37.195414 containerd[1525]: time="2025-12-16T12:34:37.195385643Z" level=info msg="StartContainer for \"351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa\""
Dec 16 12:34:37.197360 containerd[1525]: time="2025-12-16T12:34:37.197328924Z" level=info msg="connecting to shim 351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" protocol=ttrpc version=3
Dec 16 12:34:37.226351 systemd[1]: Started cri-containerd-351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa.scope - libcontainer container 351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa.
Dec 16 12:34:37.265963 systemd[1]: cri-containerd-351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa.scope: Deactivated successfully.
Dec 16 12:34:37.267478 containerd[1525]: time="2025-12-16T12:34:37.267357769Z" level=info msg="received container exit event container_id:\"351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa\" id:\"351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa\" pid:4662 exited_at:{seconds:1765888477 nanos:265875358}"
Dec 16 12:34:37.277518 containerd[1525]: time="2025-12-16T12:34:37.277354849Z" level=info msg="StartContainer for \"351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa\" returns successfully"
Dec 16 12:34:37.291587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-351fdea7b2b85e5013dbf551941f79ebc9fb577447604849310b8ea9cb3eb7aa-rootfs.mount: Deactivated successfully.
Dec 16 12:34:38.181108 containerd[1525]: time="2025-12-16T12:34:38.181028238Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 12:34:38.205124 containerd[1525]: time="2025-12-16T12:34:38.205050409Z" level=info msg="Container 7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:34:38.213986 containerd[1525]: time="2025-12-16T12:34:38.213743080Z" level=info msg="CreateContainer within sandbox \"aad3719bb8998bc2229dd983fd59294c1150378408f20093dd73d862e1d852b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b\""
Dec 16 12:34:38.214690 containerd[1525]: time="2025-12-16T12:34:38.214655262Z" level=info msg="StartContainer for \"7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b\""
Dec 16 12:34:38.216178 containerd[1525]: time="2025-12-16T12:34:38.215842639Z" level=info msg="connecting to shim 7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b" address="unix:///run/containerd/s/d99ca4792ed032a963ad18648b605837c25620f752433600e380700c233a0ff7" protocol=ttrpc version=3
Dec 16 12:34:38.241031 systemd[1]: Started cri-containerd-7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b.scope - libcontainer container 7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b.
Dec 16 12:34:38.297687 containerd[1525]: time="2025-12-16T12:34:38.297642724Z" level=info msg="StartContainer for \"7aec6480d1528e27932b525c94afa93f13c3524023c30a80a8251995e0a74e4b\" returns successfully"
Dec 16 12:34:38.607796 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 16 12:34:39.200035 kubelet[2687]: I1216 12:34:39.199969 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q7lm8" podStartSLOduration=5.199949889 podStartE2EDuration="5.199949889s" podCreationTimestamp="2025-12-16 12:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:34:39.199082065 +0000 UTC m=+77.403107111" watchObservedRunningTime="2025-12-16 12:34:39.199949889 +0000 UTC m=+77.403974935"
Dec 16 12:34:41.777683 systemd-networkd[1443]: lxc_health: Link UP
Dec 16 12:34:41.778043 systemd-networkd[1443]: lxc_health: Gained carrier
Dec 16 12:34:43.305173 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Dec 16 12:34:47.461795 sshd[4467]: Connection closed by 10.0.0.1 port 38038
Dec 16 12:34:47.462301 sshd-session[4462]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:47.466684 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:38038.service: Deactivated successfully.
Dec 16 12:34:47.469804 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 12:34:47.471038 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit.
Dec 16 12:34:47.472104 systemd-logind[1509]: Removed session 25.