Dec 12 17:42:30.799408 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:42:30.799433 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025 Dec 12 17:42:30.799443 kernel: KASLR enabled Dec 12 17:42:30.799449 kernel: efi: EFI v2.7 by EDK II Dec 12 17:42:30.799454 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 12 17:42:30.799460 kernel: random: crng init done Dec 12 17:42:30.799481 kernel: secureboot: Secure boot disabled Dec 12 17:42:30.799488 kernel: ACPI: Early table checksum verification disabled Dec 12 17:42:30.799494 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 12 17:42:30.799502 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:42:30.799508 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799515 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799558 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799567 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799574 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799584 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799590 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799597 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799603 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:42:30.799610 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 12 17:42:30.799616 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:42:30.799622 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:42:30.799628 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 12 17:42:30.799634 kernel: Zone ranges: Dec 12 17:42:30.799641 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:42:30.799648 kernel: DMA32 empty Dec 12 17:42:30.799654 kernel: Normal empty Dec 12 17:42:30.799660 kernel: Device empty Dec 12 17:42:30.799666 kernel: Movable zone start for each node Dec 12 17:42:30.799673 kernel: Early memory node ranges Dec 12 17:42:30.799679 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 12 17:42:30.799685 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 12 17:42:30.799692 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 12 17:42:30.799698 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 12 17:42:30.799704 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 12 17:42:30.799711 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 12 17:42:30.799717 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 12 17:42:30.799725 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 12 17:42:30.799731 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 12 17:42:30.799738 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 12 17:42:30.799747 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 12 17:42:30.799754 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 12 17:42:30.799761 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 12 17:42:30.799769 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:42:30.799775 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 12 17:42:30.799796 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 12 17:42:30.799803 kernel: psci: probing for conduit method from ACPI. Dec 12 17:42:30.799809 kernel: psci: PSCIv1.1 detected in firmware. Dec 12 17:42:30.799816 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:42:30.799823 kernel: psci: Trusted OS migration not required Dec 12 17:42:30.799829 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:42:30.799836 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:42:30.799843 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:42:30.799881 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:42:30.799892 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:42:30.799899 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:42:30.799910 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:42:30.799935 kernel: CPU features: detected: Spectre-v4 Dec 12 17:42:30.799967 kernel: CPU features: detected: Spectre-BHB Dec 12 17:42:30.799985 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:42:30.799993 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:42:30.800003 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:42:30.800010 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:42:30.800017 kernel: alternatives: applying boot alternatives Dec 12 17:42:30.800025 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 12 17:42:30.800035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:42:30.800042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:42:30.800048 kernel: Fallback order for Node 0: 0 Dec 12 17:42:30.800055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 12 17:42:30.800061 kernel: Policy zone: DMA Dec 12 17:42:30.800068 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:42:30.800075 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 12 17:42:30.800082 kernel: software IO TLB: area num 4. Dec 12 17:42:30.800088 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 12 17:42:30.800094 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 12 17:42:30.800101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:42:30.800140 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:42:30.800148 kernel: rcu: RCU event tracing is enabled. Dec 12 17:42:30.800155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:42:30.800162 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:42:30.800168 kernel: Tracing variant of Tasks RCU enabled. 
Dec 12 17:42:30.800175 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:42:30.800182 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:42:30.800188 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:42:30.800195 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:42:30.800202 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:42:30.800208 kernel: GICv3: 256 SPIs implemented Dec 12 17:42:30.800216 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:42:30.800223 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:42:30.800230 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:42:30.800236 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:42:30.800243 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:42:30.800249 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:42:30.800256 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:42:30.800263 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:42:30.800269 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 12 17:42:30.800276 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 12 17:42:30.800283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:42:30.800290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:42:30.800298 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:42:30.800305 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:42:30.800312 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:42:30.800350 kernel: arm-pv: using stolen time PV Dec 12 17:42:30.800359 kernel: Console: colour dummy device 80x25 Dec 12 17:42:30.800366 kernel: ACPI: Core revision 20240827 Dec 12 17:42:30.800373 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:42:30.800380 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:42:30.800386 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:42:30.800393 kernel: landlock: Up and running. Dec 12 17:42:30.800403 kernel: SELinux: Initializing. Dec 12 17:42:30.800410 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:42:30.800416 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:42:30.800423 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:42:30.800430 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:42:30.800437 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:42:30.800444 kernel: Remapping and enabling EFI services. Dec 12 17:42:30.800450 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:42:30.800457 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:42:30.800485 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:42:30.800493 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 12 17:42:30.800502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:42:30.800509 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:42:30.800517 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:42:30.800557 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:42:30.800566 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 12 17:42:30.800577 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:42:30.800584 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:42:30.800591 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:42:30.800598 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:42:30.800605 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 12 17:42:30.800612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:42:30.800619 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:42:30.800626 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:42:30.800633 kernel: SMP: Total of 4 processors activated. Dec 12 17:42:30.800640 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:42:30.800649 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:42:30.800656 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:42:30.800663 kernel: CPU features: detected: Common not Private translations Dec 12 17:42:30.800670 kernel: CPU features: detected: CRC32 instructions Dec 12 17:42:30.800677 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:42:30.800684 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:42:30.800691 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:42:30.800698 kernel: CPU features: detected: Privileged Access Never Dec 12 17:42:30.800705 kernel: CPU features: detected: RAS Extension Support Dec 12 17:42:30.800715 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:42:30.800722 kernel: alternatives: applying system-wide alternatives Dec 12 17:42:30.800729 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:42:30.800762 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved) Dec 12 17:42:30.800775 kernel: devtmpfs: initialized Dec 12 17:42:30.800782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:42:30.800790 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:42:30.800797 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:42:30.800804 kernel: 0 pages in range for non-PLT usage Dec 12 17:42:30.800815 kernel: 508400 pages in range for PLT usage Dec 12 17:42:30.800822 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:42:30.800829 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:42:30.800836 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 12 17:42:30.800843 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:42:30.800850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:42:30.800858 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:42:30.800865 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:42:30.800872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:42:30.800881 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:42:30.800888 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Dec 12 17:42:30.800895 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:42:30.800902 kernel: cpuidle: using governor menu Dec 12 17:42:30.800909 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:42:30.800916 kernel: ASID allocator initialised with 32768 entries Dec 12 17:42:30.800929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:42:30.800937 kernel: Serial: AMBA PL011 UART driver Dec 12 17:42:30.800944 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:42:30.800954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:42:30.800994 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:42:30.801002 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:42:30.801010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:42:30.801018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:42:30.801026 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:42:30.801033 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:42:30.801041 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:42:30.801048 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:42:30.801059 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:42:30.801067 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:42:30.801074 kernel: ACPI: Interpreter enabled Dec 12 17:42:30.801081 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:42:30.801089 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:42:30.801096 kernel: ACPI: CPU0 has been hot-added Dec 12 17:42:30.801103 kernel: ACPI: CPU1 has been hot-added Dec 12 17:42:30.801110 kernel: ACPI: CPU2 has been hot-added Dec 12 17:42:30.801118 kernel: ACPI: CPU3 has been hot-added Dec 12 17:42:30.801127 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:42:30.801134 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:42:30.801141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:42:30.801342 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:42:30.801412 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:42:30.801555 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:42:30.801630 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:42:30.801695 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:42:30.801705 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:42:30.801752 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:42:30.801839 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:42:30.801896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:42:30.802002 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:42:30.802067 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:42:30.802201 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:42:30.802295 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 17:42:30.802361 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 12 17:42:30.802499 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 12 17:42:30.802575 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:42:30.802690 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:42:30.802773 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 12 17:42:30.802839 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 12 17:42:30.802957 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:42:30.803020 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:42:30.803074 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:42:30.803119 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:42:30.803128 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:42:30.803136 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:42:30.803143 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:42:30.803155 kernel: iommu: Default domain type: Translated Dec 12 17:42:30.803162 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:42:30.803169 kernel: efivars: Registered efivars operations Dec 12 17:42:30.803177 kernel: vgaarb: loaded Dec 12 17:42:30.803184 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:42:30.803191 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:42:30.803198 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:42:30.803206 kernel: pnp: PnP ACPI init Dec 12 17:42:30.803325 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:42:30.803344 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:42:30.803352 kernel: NET: Registered PF_INET protocol family Dec 12 17:42:30.803359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:42:30.803366 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:42:30.803373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:42:30.803381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:42:30.803388 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:42:30.803395 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:42:30.803404 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:42:30.803411 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:42:30.803418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:42:30.803426 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:42:30.803432 
kernel: kvm [1]: HYP mode not available Dec 12 17:42:30.803439 kernel: Initialise system trusted keyrings Dec 12 17:42:30.803447 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:42:30.803454 kernel: Key type asymmetric registered Dec 12 17:42:30.803461 kernel: Asymmetric key parser 'x509' registered Dec 12 17:42:30.803495 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:42:30.803503 kernel: io scheduler mq-deadline registered Dec 12 17:42:30.803510 kernel: io scheduler kyber registered Dec 12 17:42:30.803550 kernel: io scheduler bfq registered Dec 12 17:42:30.803558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:42:30.803565 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:42:30.803573 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:42:30.803663 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 12 17:42:30.803674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:42:30.803684 kernel: thunder_xcv, ver 1.0 Dec 12 17:42:30.803691 kernel: thunder_bgx, ver 1.0 Dec 12 17:42:30.803698 kernel: nicpf, ver 1.0 Dec 12 17:42:30.803705 kernel: nicvf, ver 1.0 Dec 12 17:42:30.803824 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:42:30.803887 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:42:30 UTC (1765561350) Dec 12 17:42:30.803897 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:42:30.803904 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:42:30.803915 kernel: watchdog: NMI not fully supported Dec 12 17:42:30.803959 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:42:30.803969 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:42:30.803977 kernel: Segment Routing with IPv6 Dec 12 17:42:30.803984 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:42:30.803991 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:42:30.803998 kernel: Key type dns_resolver registered Dec 12 17:42:30.804005 kernel: registered taskstats version 1 Dec 12 17:42:30.804012 kernel: Loading compiled-in X.509 certificates Dec 12 17:42:30.804023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 12 17:42:30.804030 kernel: Demotion targets for Node 0: null Dec 12 17:42:30.804037 kernel: Key type .fscrypt registered Dec 12 17:42:30.804044 kernel: Key type fscrypt-provisioning registered Dec 12 17:42:30.804051 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:42:30.804058 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:42:30.804065 kernel: ima: No architecture policies found Dec 12 17:42:30.804072 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:42:30.804079 kernel: clk: Disabling unused clocks Dec 12 17:42:30.804086 kernel: PM: genpd: Disabling unused power domains Dec 12 17:42:30.804095 kernel: Warning: unable to open an initial console. Dec 12 17:42:30.804102 kernel: Freeing unused kernel memory: 39552K Dec 12 17:42:30.804109 kernel: Run /init as init process Dec 12 17:42:30.804116 kernel: with arguments: Dec 12 17:42:30.804123 kernel: /init Dec 12 17:42:30.804130 kernel: with environment: Dec 12 17:42:30.804137 kernel: HOME=/ Dec 12 17:42:30.804144 kernel: TERM=linux Dec 12 17:42:30.804152 systemd[1]: Successfully made /usr/ read-only. 
Dec 12 17:42:30.804198 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:42:30.804206 systemd[1]: Detected virtualization kvm. Dec 12 17:42:30.804214 systemd[1]: Detected architecture arm64. Dec 12 17:42:30.804221 systemd[1]: Running in initrd. Dec 12 17:42:30.804229 systemd[1]: No hostname configured, using default hostname. Dec 12 17:42:30.804236 systemd[1]: Hostname set to . Dec 12 17:42:30.804246 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:42:30.804254 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:42:30.804262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:42:30.804270 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:42:30.804278 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:42:30.804286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:42:30.804294 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:42:30.804302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:42:30.804313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 17:42:30.804320 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 17:42:30.804328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:42:30.804336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:42:30.804343 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:42:30.804351 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:42:30.804358 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:42:30.804366 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:42:30.804404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:42:30.804414 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:42:30.804421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:42:30.804429 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:42:30.804437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:42:30.804444 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:42:30.804452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:42:30.804460 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:42:30.804487 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:42:30.804495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:42:30.804503 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Dec 12 17:42:30.804512 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:42:30.804519 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:42:30.804527 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:42:30.804535 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:42:30.804542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:42:30.804550 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:42:30.804560 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:42:30.804567 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:42:30.804632 systemd-journald[245]: Collecting audit messages is disabled. Dec 12 17:42:30.804661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:42:30.804670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:42:30.804680 systemd-journald[245]: Journal started Dec 12 17:42:30.804699 systemd-journald[245]: Runtime Journal (/run/log/journal/ec3ecd490a764aa6a1d5eb4b454cdb71) is 6M, max 48.5M, 42.4M free. Dec 12 17:42:30.809578 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:42:30.797874 systemd-modules-load[246]: Inserted module 'overlay' Dec 12 17:42:30.815208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:42:30.815232 kernel: Bridge firewalling registered Dec 12 17:42:30.815243 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:42:30.813458 systemd-modules-load[246]: Inserted module 'br_netfilter' Dec 12 17:42:30.816589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:42:30.825138 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:42:30.826944 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:42:30.828632 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:42:30.833843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:42:30.840522 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:42:30.844306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:42:30.844387 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:42:30.848134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:42:30.853082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:42:30.854383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:42:30.861757 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 12 17:42:30.878669 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 12 17:42:30.888896 systemd-resolved[288]: Positive Trust Anchors: Dec 12 17:42:30.888916 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:42:30.888957 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:42:30.896617 systemd-resolved[288]: Defaulting to hostname 'linux'. Dec 12 17:42:30.897795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:42:30.900999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:42:30.967479 kernel: SCSI subsystem initialized Dec 12 17:42:30.971498 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:42:30.979513 kernel: iscsi: registered transport (tcp) Dec 12 17:42:30.992488 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:42:30.992528 kernel: QLogic iSCSI HBA Driver Dec 12 17:42:31.010674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:42:31.039178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:42:31.042085 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:42:31.090948 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:42:31.093852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:42:31.156505 kernel: raid6: neonx8 gen() 15780 MB/s Dec 12 17:42:31.173511 kernel: raid6: neonx4 gen() 15807 MB/s Dec 12 17:42:31.190494 kernel: raid6: neonx2 gen() 13190 MB/s Dec 12 17:42:31.207486 kernel: raid6: neonx1 gen() 10410 MB/s Dec 12 17:42:31.224485 kernel: raid6: int64x8 gen() 6902 MB/s Dec 12 17:42:31.241481 kernel: raid6: int64x4 gen() 7352 MB/s Dec 12 17:42:31.258483 kernel: raid6: int64x2 gen() 6108 MB/s Dec 12 17:42:31.275515 kernel: raid6: int64x1 gen() 5055 MB/s Dec 12 17:42:31.275543 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s Dec 12 17:42:31.293542 kernel: raid6: .... xor() 12313 MB/s, rmw enabled Dec 12 17:42:31.293597 kernel: raid6: using neon recovery algorithm Dec 12 17:42:31.298487 kernel: xor: measuring software checksum speed Dec 12 17:42:31.299598 kernel: 8regs : 19263 MB/sec Dec 12 17:42:31.299612 kernel: 32regs : 21681 MB/sec Dec 12 17:42:31.300835 kernel: arm64_neon : 28003 MB/sec Dec 12 17:42:31.300851 kernel: xor: using function: arm64_neon (28003 MB/sec) Dec 12 17:42:31.353520 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:42:31.359436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 12 17:42:31.362003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:42:31.389771 systemd-udevd[499]: Using default interface naming scheme 'v255'. Dec 12 17:42:31.393983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:42:31.397320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:42:31.420653 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Dec 12 17:42:31.446342 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:42:31.449156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:42:31.500499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:42:31.503697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:42:31.554516 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:42:31.565039 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 12 17:42:31.568485 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:42:31.568541 kernel: GPT:9289727 != 19775487 Dec 12 17:42:31.568560 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:42:31.568570 kernel: GPT:9289727 != 19775487 Dec 12 17:42:31.568840 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:42:31.569925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:42:31.571248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:42:31.571380 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:42:31.584377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:42:31.586892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:42:31.624595 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:42:31.634538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:42:31.636018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:42:31.638906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:42:31.646948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:42:31.648258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 17:42:31.658991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:42:31.660431 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:42:31.662704 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:42:31.664555 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:42:31.667393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:42:31.669407 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:42:31.689140 disk-uuid[591]: Primary Header is updated. Dec 12 17:42:31.689140 disk-uuid[591]: Secondary Entries is updated. Dec 12 17:42:31.689140 disk-uuid[591]: Secondary Header is updated. 
Dec 12 17:42:31.694520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:42:31.691664 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:42:32.702490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:42:32.702861 disk-uuid[597]: The operation has completed successfully. Dec 12 17:42:32.728359 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:42:32.728499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:42:32.761562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:42:32.786677 sh[611]: Success Dec 12 17:42:32.807995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:42:32.808058 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:42:32.808070 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:42:32.818980 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:42:32.850757 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:42:32.861284 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 17:42:32.864197 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 17:42:32.874481 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623) Dec 12 17:42:32.876493 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 12 17:42:32.876530 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:42:32.881167 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:42:32.881220 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:42:32.882603 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 17:42:32.883998 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:42:32.887120 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:42:32.888122 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:42:32.909237 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:42:32.928495 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652) Dec 12 17:42:32.931679 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:42:32.931750 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:42:32.935013 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:42:32.935080 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:42:32.940545 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:42:32.941875 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:42:32.946627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:42:33.021515 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:42:33.024359 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 12 17:42:33.055065 ignition[699]: Ignition 2.22.0 Dec 12 17:42:33.055082 ignition[699]: Stage: fetch-offline Dec 12 17:42:33.055120 ignition[699]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:33.055129 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:33.055216 ignition[699]: parsed url from cmdline: "" Dec 12 17:42:33.055220 ignition[699]: no config URL provided Dec 12 17:42:33.055225 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:42:33.055232 ignition[699]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:42:33.055254 ignition[699]: op(1): [started] loading QEMU firmware config module Dec 12 17:42:33.055259 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:42:33.062281 ignition[699]: op(1): [finished] loading QEMU firmware config module Dec 12 17:42:33.064769 systemd-networkd[804]: lo: Link UP Dec 12 17:42:33.064773 systemd-networkd[804]: lo: Gained carrier Dec 12 17:42:33.065793 systemd-networkd[804]: Enumeration completed Dec 12 17:42:33.065908 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:42:33.066297 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:42:33.066300 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:42:33.067352 systemd-networkd[804]: eth0: Link UP Dec 12 17:42:33.067532 systemd[1]: Reached target network.target - Network. Dec 12 17:42:33.067764 systemd-networkd[804]: eth0: Gained carrier Dec 12 17:42:33.067780 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:42:33.094531 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:42:33.116082 ignition[699]: parsing config with SHA512: 2df323b3736f6c8040b83e7699c99d38932225354d5ad9236a8f2d255587250e63c3d318f63a7154e4776c3ffd87c97cf04672e0874f2b80febd4887a9c2a8b0 Dec 12 17:42:33.122696 unknown[699]: fetched base config from "system" Dec 12 17:42:33.123130 ignition[699]: fetch-offline: fetch-offline passed Dec 12 17:42:33.122708 unknown[699]: fetched user config from "qemu" Dec 12 17:42:33.123202 ignition[699]: Ignition finished successfully Dec 12 17:42:33.127055 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:42:33.128987 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:42:33.130019 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:42:33.173187 ignition[812]: Ignition 2.22.0 Dec 12 17:42:33.173206 ignition[812]: Stage: kargs Dec 12 17:42:33.173354 ignition[812]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:33.173363 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:33.174186 ignition[812]: kargs: kargs passed Dec 12 17:42:33.174251 ignition[812]: Ignition finished successfully Dec 12 17:42:33.179202 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:42:33.181422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:42:33.210753 ignition[820]: Ignition 2.22.0 Dec 12 17:42:33.210773 ignition[820]: Stage: disks Dec 12 17:42:33.210938 ignition[820]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:33.210948 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:33.211771 ignition[820]: disks: disks passed Dec 12 17:42:33.214168 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:42:33.211822 ignition[820]: Ignition finished successfully Dec 12 17:42:33.215586 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:42:33.217149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:42:33.218754 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:42:33.220385 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:42:33.222263 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:42:33.224848 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:42:33.245519 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.114 Dec 12 17:42:33.245536 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Dec 12 17:42:33.248342 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 17:42:33.255538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:42:33.258280 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:42:33.325502 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 12 17:42:33.326227 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:42:33.327614 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:42:33.330655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:42:33.332992 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:42:33.333989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:42:33.334034 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:42:33.334065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:42:33.348353 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:42:33.350622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:42:33.354753 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Dec 12 17:42:33.354784 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:42:33.356611 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:42:33.359744 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:42:33.359798 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:42:33.361327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:42:33.391111 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:42:33.396032 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:42:33.400338 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:42:33.405008 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:42:33.481179 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:42:33.483229 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:42:33.486809 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:42:33.514127 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:42:33.521543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:42:33.533814 ignition[954]: INFO : Ignition 2.22.0 Dec 12 17:42:33.533814 ignition[954]: INFO : Stage: mount Dec 12 17:42:33.536477 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:33.536477 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:33.536477 ignition[954]: INFO : mount: mount passed Dec 12 17:42:33.536477 ignition[954]: INFO : Ignition finished successfully Dec 12 17:42:33.537299 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:42:33.539285 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:42:33.874527 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:42:33.876067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:42:33.903527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Dec 12 17:42:33.903584 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:42:33.903595 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:42:33.906844 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:42:33.906887 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:42:33.908315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:42:33.938965 ignition[983]: INFO : Ignition 2.22.0 Dec 12 17:42:33.938965 ignition[983]: INFO : Stage: files Dec 12 17:42:33.940551 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:33.940551 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:33.940551 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:42:33.943808 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:42:33.943808 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:42:33.943808 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:42:33.943808 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:42:33.943808 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:42:33.943250 unknown[983]: wrote ssh authorized keys file for user: core Dec 12 17:42:33.951390 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:42:33.951390 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 17:42:33.986592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:42:34.102795 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:42:34.104583 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:42:34.104583 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 12 17:42:34.311072 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 17:42:34.382636 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:42:34.382636 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:42:34.386203 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:42:34.398492 ignition[983]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:42:34.398492 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:42:34.398492 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:42:34.398492 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:42:34.398492 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:42:34.398492 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:42:34.425686 systemd-networkd[804]: eth0: Gained IPv6LL Dec 12 17:42:34.587476 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 17:42:34.818927 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:42:34.818927 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 17:42:34.823103 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:42:34.867872 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:42:34.867872 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 17:42:34.867872 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 12 17:42:34.867872 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:42:34.876115 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:42:34.876115 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 12 17:42:34.876115 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:42:34.887293 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:42:34.891382 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:42:34.892948 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:42:34.892948 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:42:34.892948 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:42:34.892948 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:42:34.892948 ignition[983]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:42:34.892948 ignition[983]: INFO : files: files passed Dec 12 17:42:34.892948 ignition[983]: INFO : Ignition finished successfully Dec 12 17:42:34.896533 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:42:34.899552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:42:34.904420 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:42:34.915669 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:42:34.915782 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:42:34.919029 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:42:34.920834 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:42:34.920834 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:42:34.924248 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:42:34.924231 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:42:34.925766 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:42:34.929516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:42:34.982281 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:42:34.982427 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:42:34.984906 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:42:34.986456 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:42:34.988271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:42:34.989182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:42:35.021860 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:42:35.027324 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:42:35.051095 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:42:35.052782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:42:35.054851 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:42:35.057722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:42:35.057861 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:42:35.060502 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:42:35.064055 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:42:35.065260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:42:35.067305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:42:35.069327 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:42:35.071337 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Dec 12 17:42:35.073363 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:42:35.075343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:42:35.077427 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:42:35.079547 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:42:35.081512 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:42:35.083199 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:42:35.083348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:42:35.085758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:42:35.087717 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:42:35.089832 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:42:35.090565 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:42:35.092018 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:42:35.092159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:42:35.094896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:42:35.095034 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:42:35.096982 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:42:35.098523 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:42:35.099586 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:42:35.101731 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:42:35.103253 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:42:35.105048 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:42:35.105150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:42:35.107279 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:42:35.107366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:42:35.108964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:42:35.109099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:42:35.110799 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:42:35.110920 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:42:35.113338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:42:35.115620 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:42:35.116507 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:42:35.116641 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:42:35.118741 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:42:35.118838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:42:35.124648 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:42:35.130650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:42:35.139809 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 12 17:42:35.146710 ignition[1038]: INFO : Ignition 2.22.0 Dec 12 17:42:35.146710 ignition[1038]: INFO : Stage: umount Dec 12 17:42:35.148389 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:42:35.148389 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:42:35.148389 ignition[1038]: INFO : umount: umount passed Dec 12 17:42:35.148389 ignition[1038]: INFO : Ignition finished successfully Dec 12 17:42:35.150807 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:42:35.152517 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:42:35.155508 systemd[1]: Stopped target network.target - Network. Dec 12 17:42:35.156367 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:42:35.156440 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:42:35.159127 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:42:35.159182 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:42:35.160850 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:42:35.160900 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:42:35.162792 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:42:35.162837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:42:35.164816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:42:35.166550 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:42:35.174559 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:42:35.174680 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:42:35.178013 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:42:35.178215 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:42:35.178333 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:42:35.183238 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:42:35.183871 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:42:35.185548 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:42:35.185589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:42:35.188396 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:42:35.190303 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:42:35.190369 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:42:35.192580 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:42:35.192627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:42:35.196175 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:42:35.196224 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:42:35.198260 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:42:35.198310 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:42:35.201251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 12 17:42:35.206106 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:42:35.206178 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:42:35.214841 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:42:35.217790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:42:35.219997 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:42:35.220625 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:42:35.222313 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:42:35.222417 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:42:35.224948 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:42:35.225110 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:42:35.227243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:42:35.227283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:42:35.228961 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:42:35.228997 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:42:35.231049 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:42:35.231103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:42:35.233732 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:42:35.233781 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:42:35.236406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:42:35.236479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:42:35.240264 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:42:35.241405 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:42:35.241491 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:42:35.245777 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:42:35.245824 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:42:35.248964 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:42:35.249011 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:42:35.256709 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:42:35.256958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:42:35.258778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:42:35.258831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:42:35.264731 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:42:35.264795 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 17:42:35.264826 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:42:35.264858 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 12 17:42:35.265154 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:42:35.265236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:42:35.270423 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:42:35.277890 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:42:35.298788 systemd[1]: Switching root. Dec 12 17:42:35.338228 systemd-journald[245]: Journal stopped Dec 12 17:42:36.137218 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Dec 12 17:42:36.137265 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:42:36.137278 kernel: SELinux: policy capability open_perms=1 Dec 12 17:42:36.137287 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:42:36.137299 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:42:36.137310 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:42:36.137319 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:42:36.137328 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:42:36.137338 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:42:36.137348 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:42:36.137362 kernel: audit: type=1403 audit(1765561355.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:42:36.137375 systemd[1]: Successfully loaded SELinux policy in 56.168ms. Dec 12 17:42:36.137396 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.397ms. Dec 12 17:42:36.137407 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:42:36.137418 systemd[1]: Detected virtualization kvm. Dec 12 17:42:36.137429 systemd[1]: Detected architecture arm64. Dec 12 17:42:36.137439 systemd[1]: Detected first boot. Dec 12 17:42:36.137453 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:42:36.137565 zram_generator::config[1085]: No configuration found. Dec 12 17:42:36.137579 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:42:36.137590 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:42:36.137601 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:42:36.137612 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:42:36.137622 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:42:36.137633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:42:36.137646 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:42:36.137657 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:42:36.137668 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:42:36.137678 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:42:36.137689 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:42:36.137700 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Dec 12 17:42:36.137715 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:42:36.137727 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:42:36.137737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:42:36.137750 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:42:36.137761 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:42:36.137772 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:42:36.137785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:42:36.137796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:42:36.137807 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:42:36.137817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:42:36.137829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:42:36.137840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:42:36.137850 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:42:36.137861 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:42:36.137871 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:42:36.137882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:42:36.137892 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:42:36.137902 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:42:36.137938 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:42:36.137949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:42:36.137961 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:42:36.137972 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:42:36.137995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:42:36.138006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:42:36.138017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:42:36.138027 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:42:36.138038 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:42:36.138048 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:42:36.138058 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:42:36.138069 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:42:36.138080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:42:36.138090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:42:36.138101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:42:36.138111 systemd[1]: Reached target machines.target - Containers. 
Dec 12 17:42:36.138123 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:42:36.138133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:42:36.138145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:42:36.138158 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:42:36.138168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:42:36.138178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:42:36.138188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:42:36.138203 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:42:36.138213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:42:36.138224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:42:36.138234 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:42:36.138246 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:42:36.138257 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:42:36.138267 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:42:36.138278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:42:36.138289 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:42:36.138299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:42:36.138308 kernel: fuse: init (API version 7.41) Dec 12 17:42:36.138317 kernel: loop: module loaded Dec 12 17:42:36.138326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:42:36.138337 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:42:36.138348 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:42:36.138358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:42:36.138368 kernel: ACPI: bus type drm_connector registered Dec 12 17:42:36.138377 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:42:36.138390 systemd[1]: Stopped verity-setup.service. Dec 12 17:42:36.138400 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:42:36.138410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:42:36.138420 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:42:36.138459 systemd-journald[1150]: Collecting audit messages is disabled. Dec 12 17:42:36.138505 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:42:36.138517 systemd-journald[1150]: Journal started Dec 12 17:42:36.138537 systemd-journald[1150]: Runtime Journal (/run/log/journal/ec3ecd490a764aa6a1d5eb4b454cdb71) is 6M, max 48.5M, 42.4M free. Dec 12 17:42:35.894292 systemd[1]: Queued start job for default target multi-user.target. 
Dec 12 17:42:35.916527 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:42:35.916945 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:42:36.140734 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:42:36.142896 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:42:36.143629 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:42:36.146523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:42:36.147997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:42:36.149616 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:42:36.149807 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:42:36.151272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:42:36.151442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:42:36.152900 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:42:36.153096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:42:36.154678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:42:36.154837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:42:36.156345 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:42:36.156502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:42:36.157835 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:42:36.158027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:42:36.159432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:42:36.160899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:42:36.162605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:42:36.164196 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:42:36.177407 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:42:36.180031 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:42:36.182553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:42:36.183893 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:42:36.183941 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:42:36.185945 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:42:36.192406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:42:36.193688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:42:36.194865 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:42:36.197022 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:42:36.198324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 12 17:42:36.201606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:42:36.202781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:42:36.205421 systemd-journald[1150]: Time spent on flushing to /var/log/journal/ec3ecd490a764aa6a1d5eb4b454cdb71 is 15.723ms for 892 entries. Dec 12 17:42:36.205421 systemd-journald[1150]: System Journal (/var/log/journal/ec3ecd490a764aa6a1d5eb4b454cdb71) is 8M, max 195.6M, 187.6M free. Dec 12 17:42:36.236609 systemd-journald[1150]: Received client request to flush runtime journal. Dec 12 17:42:36.236669 kernel: loop0: detected capacity change from 0 to 119840 Dec 12 17:42:36.206156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:42:36.209697 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:42:36.213687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:42:36.223797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:42:36.225983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:42:36.228360 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:42:36.231764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:42:36.236547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:42:36.240672 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:42:36.243735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:42:36.256728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:42:36.259491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:42:36.276022 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Dec 12 17:42:36.276415 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Dec 12 17:42:36.279928 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:42:36.284542 kernel: loop1: detected capacity change from 0 to 207008 Dec 12 17:42:36.284734 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:42:36.331676 kernel: loop2: detected capacity change from 0 to 100632 Dec 12 17:42:36.339527 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:42:36.342037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:42:36.363633 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 12 17:42:36.363653 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 12 17:42:36.364503 kernel: loop3: detected capacity change from 0 to 119840 Dec 12 17:42:36.366769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:42:36.376522 kernel: loop4: detected capacity change from 0 to 207008 Dec 12 17:42:36.382509 kernel: loop5: detected capacity change from 0 to 100632 Dec 12 17:42:36.387867 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:42:36.388280 (sd-merge)[1226]: Merged extensions into '/usr'. 
Dec 12 17:42:36.392187 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:42:36.396012 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:42:36.396029 systemd[1]: Reloading... Dec 12 17:42:36.448497 zram_generator::config[1252]: No configuration found. Dec 12 17:42:36.539792 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:42:36.600496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:42:36.600737 systemd[1]: Reloading finished in 204 ms. Dec 12 17:42:36.616301 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:42:36.617776 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:42:36.630892 systemd[1]: Starting ensure-sysext.service... Dec 12 17:42:36.632842 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:42:36.642795 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:42:36.642811 systemd[1]: Reloading... Dec 12 17:42:36.646871 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:42:36.646915 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:42:36.647165 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:42:36.647362 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:42:36.647997 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:42:36.648218 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 12 17:42:36.648265 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 12 17:42:36.659025 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:42:36.659038 systemd-tmpfiles[1289]: Skipping /boot Dec 12 17:42:36.666952 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:42:36.666966 systemd-tmpfiles[1289]: Skipping /boot Dec 12 17:42:36.695522 zram_generator::config[1319]: No configuration found. Dec 12 17:42:36.825362 systemd[1]: Reloading finished in 182 ms. Dec 12 17:42:36.850365 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:42:36.856307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:42:36.868814 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:42:36.871412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:42:36.881357 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:42:36.884779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:42:36.888074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:42:36.891988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:42:36.898163 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 12 17:42:36.901423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:42:36.907339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:42:36.911981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:42:36.914335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:42:36.915572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:42:36.915772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:42:36.924696 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:42:36.929461 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:42:36.931143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:42:36.931556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:42:36.938311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:42:36.939754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:42:36.941145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:42:36.941155 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Dec 12 17:42:36.941299 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:42:36.942427 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:42:36.945256 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:42:36.945424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:42:36.948259 augenrules[1386]: No rules Dec 12 17:42:36.949842 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:42:36.950125 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:42:36.952019 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:42:36.954011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:42:36.954185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:42:36.967667 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:42:36.969412 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:42:36.971049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:42:36.972961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:42:36.973199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:42:36.981758 systemd[1]: Finished ensure-sysext.service. Dec 12 17:42:36.987718 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 12 17:42:36.989413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:42:36.990703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:42:36.992942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:42:37.002684 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:42:37.005154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:42:37.005205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:42:37.008714 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:42:37.009664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:42:37.015771 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:42:37.018597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:42:37.028985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:42:37.035430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:42:37.036974 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:42:37.037148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:42:37.039955 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:42:37.048828 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:42:37.050538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:42:37.052307 augenrules[1427]: /sbin/augenrules: No change Dec 12 17:42:37.054800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:42:37.063770 augenrules[1461]: No rules Dec 12 17:42:37.065423 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:42:37.067561 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:42:37.110231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:42:37.113455 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:42:37.114410 systemd-resolved[1355]: Positive Trust Anchors: Dec 12 17:42:37.114430 systemd-resolved[1355]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:42:37.114477 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:42:37.122433 systemd-resolved[1355]: Defaulting to hostname 'linux'. Dec 12 17:42:37.123849 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:42:37.125239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:42:37.135965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:42:37.142802 systemd-networkd[1434]: lo: Link UP Dec 12 17:42:37.142809 systemd-networkd[1434]: lo: Gained carrier Dec 12 17:42:37.143752 systemd-networkd[1434]: Enumeration completed Dec 12 17:42:37.143876 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:42:37.144195 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:42:37.144199 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:42:37.144743 systemd-networkd[1434]: eth0: Link UP Dec 12 17:42:37.144855 systemd-networkd[1434]: eth0: Gained carrier Dec 12 17:42:37.144874 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:42:37.145118 systemd[1]: Reached target network.target - Network. Dec 12 17:42:37.147534 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:42:37.150493 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:42:37.157652 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:42:37.158880 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:42:37.159925 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:42:37.160699 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:42:37.162046 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Dec 12 17:42:37.162665 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:42:37.163141 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:42:37.163207 systemd-timesyncd[1435]: Initial clock synchronization to Fri 2025-12-12 17:42:37.475345 UTC. Dec 12 17:42:37.164643 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:42:37.165861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 12 17:42:37.165890 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:42:37.167458 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:42:37.168626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:42:37.170035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:42:37.171310 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:42:37.173035 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:42:37.177547 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:42:37.180426 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:42:37.182649 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:42:37.183953 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:42:37.188444 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:42:37.190932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:42:37.193112 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:42:37.194561 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:42:37.199137 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:42:37.200145 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:42:37.201127 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:42:37.201159 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:42:37.203615 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:42:37.206865 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:42:37.217790 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:42:37.221620 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:42:37.225744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:42:37.227571 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:42:37.235678 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:42:37.239082 jq[1495]: false Dec 12 17:42:37.240223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:42:37.246213 extend-filesystems[1496]: Found /dev/vda6 Dec 12 17:42:37.250571 extend-filesystems[1496]: Found /dev/vda9 Dec 12 17:42:37.251359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:42:37.253889 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:42:37.256912 extend-filesystems[1496]: Checking size of /dev/vda9 Dec 12 17:42:37.259218 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:42:37.261455 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:42:37.262049 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Dec 12 17:42:37.262803 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:42:37.268268 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:42:37.269690 extend-filesystems[1496]: Resized partition /dev/vda9 Dec 12 17:42:37.272598 extend-filesystems[1521]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:42:37.273526 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:42:37.277221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:42:37.277425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:42:37.277699 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:42:37.277872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:42:37.279611 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:42:37.282829 jq[1518]: true Dec 12 17:42:37.281767 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:42:37.281969 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:42:37.326510 jq[1525]: true Dec 12 17:42:37.299061 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:42:37.312311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:42:37.336037 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:42:37.336114 tar[1524]: linux-arm64/LICENSE Dec 12 17:42:37.355249 dbus-daemon[1492]: [system] SELinux support is enabled Dec 12 17:42:37.362611 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:42:37.362611 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:42:37.362611 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:42:37.377460 update_engine[1514]: I20251212 17:42:37.356459 1514 main.cc:92] Flatcar Update Engine starting Dec 12 17:42:37.377460 update_engine[1514]: I20251212 17:42:37.372735 1514 update_check_scheduler.cc:74] Next update check in 11m51s Dec 12 17:42:37.355436 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:42:37.377783 tar[1524]: linux-arm64/helm Dec 12 17:42:37.377807 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Dec 12 17:42:37.362549 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:42:37.362573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:42:37.366459 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:42:37.366590 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:42:37.366608 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:42:37.368539 systemd-logind[1510]: New seat seat0. Dec 12 17:42:37.370938 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 12 17:42:37.371338 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:42:37.390677 bash[1558]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:42:37.419212 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:42:37.421931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:42:37.423502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:42:37.427619 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:42:37.429335 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:42:37.431751 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:42:37.444254 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:42:37.473536 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:42:37.476812 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:42:37.481822 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:42:37.488760 containerd[1526]: time="2025-12-12T17:42:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:42:37.489451 containerd[1526]: time="2025-12-12T17:42:37.489307760Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:42:37.492956 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:42:37.493151 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:42:37.501180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Dec 12 17:42:37.502736 containerd[1526]: time="2025-12-12T17:42:37.502689680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.96µs" Dec 12 17:42:37.503686 containerd[1526]: time="2025-12-12T17:42:37.502980080Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.503764960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.503957320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.503977800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504005560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504057760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504069080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504286400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504301200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504311800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504320600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504387040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505100 containerd[1526]: time="2025-12-12T17:42:37.504592640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505425 containerd[1526]: time="2025-12-12T17:42:37.504620440Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:42:37.505425 containerd[1526]: time="2025-12-12T17:42:37.504631960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:42:37.505425 containerd[1526]: time="2025-12-12T17:42:37.504662040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:42:37.505425 containerd[1526]: 
time="2025-12-12T17:42:37.504884040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:42:37.505425 containerd[1526]: time="2025-12-12T17:42:37.504963640Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:42:37.509262 containerd[1526]: time="2025-12-12T17:42:37.509223480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:42:37.509420 containerd[1526]: time="2025-12-12T17:42:37.509406680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:42:37.509554 containerd[1526]: time="2025-12-12T17:42:37.509537920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:42:37.509615 containerd[1526]: time="2025-12-12T17:42:37.509602800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:42:37.509673 containerd[1526]: time="2025-12-12T17:42:37.509659520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:42:37.509729 containerd[1526]: time="2025-12-12T17:42:37.509715240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:42:37.509812 containerd[1526]: time="2025-12-12T17:42:37.509796840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:42:37.509866 containerd[1526]: time="2025-12-12T17:42:37.509854320Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:42:37.509930 containerd[1526]: time="2025-12-12T17:42:37.509915880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:42:37.509984 containerd[1526]: time="2025-12-12T17:42:37.509971840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:42:37.510034 containerd[1526]: time="2025-12-12T17:42:37.510020560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:42:37.510086 containerd[1526]: time="2025-12-12T17:42:37.510072960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:42:37.510302 containerd[1526]: time="2025-12-12T17:42:37.510278200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:42:37.510388 containerd[1526]: time="2025-12-12T17:42:37.510372720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:42:37.510449 containerd[1526]: time="2025-12-12T17:42:37.510434000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:42:37.510543 containerd[1526]: time="2025-12-12T17:42:37.510528080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:42:37.510595 containerd[1526]: time="2025-12-12T17:42:37.510582880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:42:37.510644 containerd[1526]: time="2025-12-12T17:42:37.510632160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:42:37.510715 containerd[1526]: 
time="2025-12-12T17:42:37.510700560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:42:37.510794 containerd[1526]: time="2025-12-12T17:42:37.510759200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:42:37.510863 containerd[1526]: time="2025-12-12T17:42:37.510849040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:42:37.510937 containerd[1526]: time="2025-12-12T17:42:37.510924520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:42:37.510989 containerd[1526]: time="2025-12-12T17:42:37.510977400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:42:37.511218 containerd[1526]: time="2025-12-12T17:42:37.511202960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:42:37.511277 containerd[1526]: time="2025-12-12T17:42:37.511266040Z" level=info msg="Start snapshots syncer" Dec 12 17:42:37.511352 containerd[1526]: time="2025-12-12T17:42:37.511339800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:42:37.512536 containerd[1526]: time="2025-12-12T17:42:37.512377320Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:42:37.512703 containerd[1526]: time="2025-12-12T17:42:37.512682640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:42:37.512818 containerd[1526]: time="2025-12-12T17:42:37.512804800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Dec 12 17:42:37.513159 containerd[1526]: time="2025-12-12T17:42:37.513121320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:42:37.513245 containerd[1526]: time="2025-12-12T17:42:37.513231960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:42:37.513299 containerd[1526]: time="2025-12-12T17:42:37.513285960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:42:37.513366 containerd[1526]: time="2025-12-12T17:42:37.513343760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:42:37.513425 containerd[1526]: time="2025-12-12T17:42:37.513411880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:42:37.513494 containerd[1526]: time="2025-12-12T17:42:37.513480960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:42:37.513565 containerd[1526]: time="2025-12-12T17:42:37.513551280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:42:37.513635 containerd[1526]: time="2025-12-12T17:42:37.513622160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:42:37.513688 containerd[1526]: time="2025-12-12T17:42:37.513676480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:42:37.513741 containerd[1526]: time="2025-12-12T17:42:37.513727640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:42:37.513825 containerd[1526]: time="2025-12-12T17:42:37.513810000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:42:37.513960 containerd[1526]: time="2025-12-12T17:42:37.513942000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514012760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514031160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514040160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514050200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514060480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514138480Z" level=info msg="runtime interface created" Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514143640Z" level=info msg="created NRI interface" Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514151560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514163800Z" level=info msg="Connect containerd service" Dec 12 17:42:37.514215 containerd[1526]: time="2025-12-12T17:42:37.514186360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:42:37.515413 containerd[1526]: time="2025-12-12T17:42:37.515341640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:42:37.521729 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:42:37.524875 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:42:37.527428 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:42:37.529711 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:42:37.586891 containerd[1526]: time="2025-12-12T17:42:37.586772920Z" level=info msg="Start subscribing containerd event" Dec 12 17:42:37.586891 containerd[1526]: time="2025-12-12T17:42:37.586904280Z" level=info msg="Start recovering state" Dec 12 17:42:37.587038 containerd[1526]: time="2025-12-12T17:42:37.587006080Z" level=info msg="Start event monitor" Dec 12 17:42:37.587038 containerd[1526]: time="2025-12-12T17:42:37.587030200Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:42:37.587075 containerd[1526]: time="2025-12-12T17:42:37.587066680Z" level=info msg="Start streaming server" Dec 12 17:42:37.587093 containerd[1526]: time="2025-12-12T17:42:37.587080800Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:42:37.587093 containerd[1526]: time="2025-12-12T17:42:37.587088120Z" level=info msg="runtime interface starting up..." Dec 12 17:42:37.587127 containerd[1526]: time="2025-12-12T17:42:37.587093160Z" level=info msg="starting plugins..." Dec 12 17:42:37.587144 containerd[1526]: time="2025-12-12T17:42:37.587127200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:42:37.587543 containerd[1526]: time="2025-12-12T17:42:37.587519080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:42:37.587583 containerd[1526]: time="2025-12-12T17:42:37.587576360Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:42:37.588874 containerd[1526]: time="2025-12-12T17:42:37.587668920Z" level=info msg="containerd successfully booted in 0.099410s" Dec 12 17:42:37.587777 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:42:37.672874 tar[1524]: linux-arm64/README.md Dec 12 17:42:37.689868 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:42:38.715808 systemd-networkd[1434]: eth0: Gained IPv6LL Dec 12 17:42:38.718321 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:42:38.720231 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:42:38.722636 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:42:38.724948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:42:38.741790 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:42:38.759240 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:42:38.759480 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
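The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: the CRI plugin is configured with confDir=/etc/cni/net.d and binDir=/opt/cni/bin, but no network add-on has installed a config yet. Purely as a minimal sketch (the file name, network name and subnet are assumptions, and on a kubeadm-managed node the CNI provider normally writes this file itself), a bridge conflist could be dropped into the logged confDir like this:

```python
# Sketch: write a minimal CNI conflist into the confDir containerd logged
# (/etc/cni/net.d). The "bridge", "host-local" and "portmap" plugins must already
# exist under the logged binDir (/opt/cni/bin); name and subnet are assumptions.
import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",                          # assumed network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],   # assumed pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print(f"wrote {path}")
```

Until some config appears there, the CRI plugin keeps reporting "cni plugin not initialized", which is why the message is logged as an error but the daemon continues to boot.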
Dec 12 17:42:38.762263 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:42:38.764322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:42:39.311490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:42:39.313080 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:42:39.315008 systemd[1]: Startup finished in 2.120s (kernel) + 4.906s (initrd) + 3.846s (userspace) = 10.872s. Dec 12 17:42:39.324940 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:42:39.682165 kubelet[1633]: E1212 17:42:39.682063 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:42:39.684599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:42:39.684730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:42:39.685053 systemd[1]: kubelet.service: Consumed 743ms CPU time, 256.3M memory peak. Dec 12 17:42:43.838524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:42:43.839624 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:56506.service - OpenSSH per-connection server daemon (10.0.0.1:56506). Dec 12 17:42:43.929842 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 56506 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:43.931738 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:43.937754 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:42:43.938860 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:42:43.945553 systemd-logind[1510]: New session 1 of user core. Dec 12 17:42:43.971363 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:42:43.973986 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:42:43.991565 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:42:43.993729 systemd-logind[1510]: New session c1 of user core. Dec 12 17:42:44.097321 systemd[1651]: Queued start job for default target default.target. Dec 12 17:42:44.108435 systemd[1651]: Created slice app.slice - User Application Slice. Dec 12 17:42:44.108467 systemd[1651]: Reached target paths.target - Paths. Dec 12 17:42:44.108525 systemd[1651]: Reached target timers.target - Timers. Dec 12 17:42:44.109697 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:42:44.119256 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:42:44.119318 systemd[1651]: Reached target sockets.target - Sockets. Dec 12 17:42:44.119355 systemd[1651]: Reached target basic.target - Basic System. Dec 12 17:42:44.119382 systemd[1651]: Reached target default.target - Main User Target. Dec 12 17:42:44.119408 systemd[1651]: Startup finished in 119ms. Dec 12 17:42:44.119549 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 12 17:42:44.120981 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:42:44.187595 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:56518.service - OpenSSH per-connection server daemon (10.0.0.1:56518). Dec 12 17:42:44.251284 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 56518 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:44.252693 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:44.256767 systemd-logind[1510]: New session 2 of user core. Dec 12 17:42:44.268681 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:42:44.321218 sshd[1665]: Connection closed by 10.0.0.1 port 56518 Dec 12 17:42:44.321798 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:44.332817 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:56518.service: Deactivated successfully. Dec 12 17:42:44.334358 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:42:44.335116 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:42:44.337722 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:56534.service - OpenSSH per-connection server daemon (10.0.0.1:56534). Dec 12 17:42:44.338706 systemd-logind[1510]: Removed session 2. Dec 12 17:42:44.403213 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 56534 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:44.405141 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:44.410322 systemd-logind[1510]: New session 3 of user core. Dec 12 17:42:44.416622 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:42:44.466042 sshd[1674]: Connection closed by 10.0.0.1 port 56534 Dec 12 17:42:44.466529 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:44.479777 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:56534.service: Deactivated successfully. Dec 12 17:42:44.482576 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:42:44.483443 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:42:44.485850 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:56550.service - OpenSSH per-connection server daemon (10.0.0.1:56550). Dec 12 17:42:44.486631 systemd-logind[1510]: Removed session 3. Dec 12 17:42:44.545798 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 56550 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:44.547268 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:44.552114 systemd-logind[1510]: New session 4 of user core. Dec 12 17:42:44.563727 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:42:44.619719 sshd[1683]: Connection closed by 10.0.0.1 port 56550 Dec 12 17:42:44.620123 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:44.633759 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:56550.service: Deactivated successfully. Dec 12 17:42:44.635263 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:42:44.636060 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:42:44.638519 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:56554.service - OpenSSH per-connection server daemon (10.0.0.1:56554). Dec 12 17:42:44.639231 systemd-logind[1510]: Removed session 4. 
Dec 12 17:42:44.694853 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 56554 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:44.696142 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:44.701315 systemd-logind[1510]: New session 5 of user core. Dec 12 17:42:44.709701 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:42:44.771192 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:42:44.771478 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:42:44.794663 sudo[1693]: pam_unix(sudo:session): session closed for user root Dec 12 17:42:44.796415 sshd[1692]: Connection closed by 10.0.0.1 port 56554 Dec 12 17:42:44.796968 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:44.821861 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:56554.service: Deactivated successfully. Dec 12 17:42:44.823584 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:42:44.824313 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:42:44.827690 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:56560.service - OpenSSH per-connection server daemon (10.0.0.1:56560). Dec 12 17:42:44.828986 systemd-logind[1510]: Removed session 5. Dec 12 17:42:44.886791 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 56560 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:44.888204 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:44.893588 systemd-logind[1510]: New session 6 of user core. Dec 12 17:42:44.904689 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:42:44.957423 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:42:44.958180 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:42:45.041759 sudo[1704]: pam_unix(sudo:session): session closed for user root Dec 12 17:42:45.047242 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:42:45.047584 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:42:45.062574 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:42:45.106598 augenrules[1726]: No rules Dec 12 17:42:45.107827 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:42:45.108346 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:42:45.111538 sudo[1703]: pam_unix(sudo:session): session closed for user root Dec 12 17:42:45.113555 sshd[1702]: Connection closed by 10.0.0.1 port 56560 Dec 12 17:42:45.115912 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Dec 12 17:42:45.127836 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:56560.service: Deactivated successfully. Dec 12 17:42:45.129204 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:42:45.130284 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:42:45.133365 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:56572.service - OpenSSH per-connection server daemon (10.0.0.1:56572). Dec 12 17:42:45.136984 systemd-logind[1510]: Removed session 6. 
Dec 12 17:42:45.196101 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 56572 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:42:45.197281 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:42:45.201817 systemd-logind[1510]: New session 7 of user core. Dec 12 17:42:45.216686 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:42:45.271348 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:42:45.271641 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:42:45.567249 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:42:45.582279 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:42:45.799426 dockerd[1761]: time="2025-12-12T17:42:45.799360051Z" level=info msg="Starting up" Dec 12 17:42:45.800175 dockerd[1761]: time="2025-12-12T17:42:45.800146082Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:42:45.813927 dockerd[1761]: time="2025-12-12T17:42:45.813872990Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:42:45.851156 dockerd[1761]: time="2025-12-12T17:42:45.851026657Z" level=info msg="Loading containers: start." Dec 12 17:42:45.859780 kernel: Initializing XFRM netlink socket Dec 12 17:42:46.062539 systemd-networkd[1434]: docker0: Link UP Dec 12 17:42:46.066291 dockerd[1761]: time="2025-12-12T17:42:46.066223900Z" level=info msg="Loading containers: done." Dec 12 17:42:46.082910 dockerd[1761]: time="2025-12-12T17:42:46.082846901Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:42:46.083063 dockerd[1761]: time="2025-12-12T17:42:46.082943378Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:42:46.083063 dockerd[1761]: time="2025-12-12T17:42:46.083036856Z" level=info msg="Initializing buildkit" Dec 12 17:42:46.105699 dockerd[1761]: time="2025-12-12T17:42:46.105553242Z" level=info msg="Completed buildkit initialization" Dec 12 17:42:46.113241 dockerd[1761]: time="2025-12-12T17:42:46.112885275Z" level=info msg="Daemon has completed initialization" Dec 12 17:42:46.113241 dockerd[1761]: time="2025-12-12T17:42:46.112975590Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:42:46.113353 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:42:46.631532 containerd[1526]: time="2025-12-12T17:42:46.631491577Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:42:47.182149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707405561.mount: Deactivated successfully. 
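dockerd reports "API listen on /run/docker.sock" above. A quick way to confirm the daemon is answering on that socket, using only the standard library, is a raw HTTP request; the socket path and the fact that the API is served there come from the log, everything else is an illustrative sketch:

```python
# Sketch: ask the Docker daemon for its version over the UNIX socket it logged
# ("API listen on /run/docker.sock"). Raw HTTP/1.1 keeps this dependency-free.
import json
import socket

SOCKET_PATH = "/run/docker.sock"   # from the dockerd log line above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCKET_PATH)
    s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk

# For a sketch, just pull the JSON object out of the response body
# (the daemon may use Content-Length or chunked encoding).
_, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body[body.find(b"{"): body.rfind(b"}") + 1])
print(info.get("Version"), info.get("ApiVersion"))
```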
Dec 12 17:42:48.131677 containerd[1526]: time="2025-12-12T17:42:48.131614647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:48.132318 containerd[1526]: time="2025-12-12T17:42:48.132270556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Dec 12 17:42:48.135561 containerd[1526]: time="2025-12-12T17:42:48.135515915Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:48.138409 containerd[1526]: time="2025-12-12T17:42:48.138359792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:48.139527 containerd[1526]: time="2025-12-12T17:42:48.139346869Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.507813484s" Dec 12 17:42:48.139527 containerd[1526]: time="2025-12-12T17:42:48.139377702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:42:48.140011 containerd[1526]: time="2025-12-12T17:42:48.139979218Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:42:49.137056 containerd[1526]: time="2025-12-12T17:42:49.137009340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:49.137624 containerd[1526]: time="2025-12-12T17:42:49.137574131Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Dec 12 17:42:49.138400 containerd[1526]: time="2025-12-12T17:42:49.138373129Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:49.142503 containerd[1526]: time="2025-12-12T17:42:49.140946937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:49.142503 containerd[1526]: time="2025-12-12T17:42:49.142013640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.002006751s" Dec 12 17:42:49.142503 containerd[1526]: time="2025-12-12T17:42:49.142043990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 
17:42:49.142503 containerd[1526]: time="2025-12-12T17:42:49.142453519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:42:49.935115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:42:49.937271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:42:50.113984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:42:50.124799 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:42:50.164265 kubelet[2051]: E1212 17:42:50.164193 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:42:50.168535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:42:50.169051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:42:50.170643 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.5M memory peak. Dec 12 17:42:50.341109 containerd[1526]: time="2025-12-12T17:42:50.340997527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:50.342581 containerd[1526]: time="2025-12-12T17:42:50.342545446Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Dec 12 17:42:50.343750 containerd[1526]: time="2025-12-12T17:42:50.343332831Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:50.346327 containerd[1526]: time="2025-12-12T17:42:50.345926027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:50.347002 containerd[1526]: time="2025-12-12T17:42:50.346946792Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.204452601s" Dec 12 17:42:50.347002 containerd[1526]: time="2025-12-12T17:42:50.346999282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:42:50.348115 containerd[1526]: time="2025-12-12T17:42:50.347894992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:42:51.358531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082546409.mount: Deactivated successfully. 
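Both kubelet start attempts so far exit with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts. On a node like this the file is typically generated later by kubeadm during init/join rather than written by hand; purely as an illustrative sketch of the kind of file the kubelet is looking for (the field values below are assumptions, not taken from this host):

```python
# Sketch only: a minimal KubeletConfiguration at the path the kubelet logged as
# missing (/var/lib/kubelet/config.yaml). In a kubeadm flow this file is created
# by `kubeadm init` / `kubeadm join`; the values here are assumptions.
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches the systemd cgroup driver reported later in the log
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path}")
```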
Dec 12 17:42:51.742719 containerd[1526]: time="2025-12-12T17:42:51.742566828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:51.743573 containerd[1526]: time="2025-12-12T17:42:51.743437821Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 12 17:42:51.744500 containerd[1526]: time="2025-12-12T17:42:51.744475554Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:51.746228 containerd[1526]: time="2025-12-12T17:42:51.746155073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:51.746993 containerd[1526]: time="2025-12-12T17:42:51.746755456Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.398826529s" Dec 12 17:42:51.746993 containerd[1526]: time="2025-12-12T17:42:51.746792832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:42:51.748324 containerd[1526]: time="2025-12-12T17:42:51.747369171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:42:52.317020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205666688.mount: Deactivated successfully. 
Dec 12 17:42:53.042766 containerd[1526]: time="2025-12-12T17:42:53.042713506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:53.043614 containerd[1526]: time="2025-12-12T17:42:53.043574861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 12 17:42:53.045226 containerd[1526]: time="2025-12-12T17:42:53.044649252Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:53.047405 containerd[1526]: time="2025-12-12T17:42:53.047369429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:53.049375 containerd[1526]: time="2025-12-12T17:42:53.049330226Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.301928893s" Dec 12 17:42:53.049375 containerd[1526]: time="2025-12-12T17:42:53.049374016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:42:53.049979 containerd[1526]: time="2025-12-12T17:42:53.049837203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:42:53.553099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545112408.mount: Deactivated successfully. 
Dec 12 17:42:53.566905 containerd[1526]: time="2025-12-12T17:42:53.566851967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:42:53.567998 containerd[1526]: time="2025-12-12T17:42:53.567955189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:42:53.569058 containerd[1526]: time="2025-12-12T17:42:53.568825752Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:42:53.570867 containerd[1526]: time="2025-12-12T17:42:53.570833797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:42:53.571569 containerd[1526]: time="2025-12-12T17:42:53.571542753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.676718ms" Dec 12 17:42:53.571744 containerd[1526]: time="2025-12-12T17:42:53.571650920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:42:53.572543 containerd[1526]: time="2025-12-12T17:42:53.572517220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:42:54.154713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363079700.mount: Deactivated successfully. 
Dec 12 17:42:55.872384 containerd[1526]: time="2025-12-12T17:42:55.872321321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:55.873193 containerd[1526]: time="2025-12-12T17:42:55.873144067Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 12 17:42:55.873810 containerd[1526]: time="2025-12-12T17:42:55.873783797Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:55.877488 containerd[1526]: time="2025-12-12T17:42:55.876285167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:55.877687 containerd[1526]: time="2025-12-12T17:42:55.877649408Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.305092714s" Dec 12 17:42:55.877726 containerd[1526]: time="2025-12-12T17:42:55.877697280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:43:00.199714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:43:00.201102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:00.220000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:43:00.220076 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:43:00.220368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:43:00.224715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:00.245343 systemd[1]: Reload requested from client PID 2208 ('systemctl') (unit session-7.scope)... Dec 12 17:43:00.245363 systemd[1]: Reloading... Dec 12 17:43:00.319554 zram_generator::config[2254]: No configuration found. Dec 12 17:43:00.526522 systemd[1]: Reloading finished in 280 ms. Dec 12 17:43:00.595760 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:43:00.600809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:43:00.613788 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:00.615264 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:43:00.615501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:43:00.615553 systemd[1]: kubelet.service: Consumed 117ms CPU time, 102.7M memory peak. Dec 12 17:43:00.617344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:00.779288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
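Each successful pull above logs the blob size and the wall-clock time (for example the etcd image: 67,941,650 bytes in 2.305092714s). Turning those pairs into an effective throughput figure is simple arithmetic; the size/duration values below are copied from the log lines in this section, and only the formatting is added:

```python
# Effective pull throughput from the sizes and durations containerd logged above.
# (size in bytes, duration in seconds) pairs copied from the log lines.
pulls = {
    "kube-apiserver:v1.32.10":          (26_428_558, 1.507813484),
    "kube-controller-manager:v1.32.10": (24_203_439, 1.002006751),
    "kube-scheduler:v1.32.10":          (19_202_938, 1.204452601),
    "kube-proxy:v1.32.10":              (27_560_818, 1.398826529),
    "coredns:v1.11.3":                  (16_948_420, 1.301928893),
    "pause:3.10":                       (267_933,    0.521676718),
    "etcd:3.5.16-0":                    (67_941_650, 2.305092714),
}

for image, (size, seconds) in pulls.items():
    mib = size / 2**20
    print(f"{image:35s} {mib:7.1f} MiB in {seconds:6.3f}s  ≈ {mib / seconds:5.1f} MiB/s")
```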
Dec 12 17:43:00.784614 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:43:00.827082 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:43:00.827082 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:43:00.827082 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:43:00.827495 kubelet[2305]: I1212 17:43:00.827146 2305 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:43:01.667250 kubelet[2305]: I1212 17:43:01.665606 2305 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:43:01.667250 kubelet[2305]: I1212 17:43:01.665775 2305 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:43:01.667250 kubelet[2305]: I1212 17:43:01.666093 2305 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:43:01.691280 kubelet[2305]: E1212 17:43:01.691217 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:43:01.693134 kubelet[2305]: I1212 17:43:01.693001 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:43:01.700369 kubelet[2305]: I1212 17:43:01.700344 2305 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:43:01.703505 kubelet[2305]: I1212 17:43:01.703478 2305 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:43:01.704779 kubelet[2305]: I1212 17:43:01.704706 2305 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:43:01.704972 kubelet[2305]: I1212 17:43:01.704765 2305 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:43:01.705087 kubelet[2305]: I1212 17:43:01.705007 2305 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:43:01.705087 kubelet[2305]: I1212 17:43:01.705017 2305 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:43:01.705233 kubelet[2305]: I1212 17:43:01.705200 2305 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:43:01.707755 kubelet[2305]: I1212 17:43:01.707621 2305 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:43:01.707755 kubelet[2305]: I1212 17:43:01.707651 2305 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:43:01.707755 kubelet[2305]: I1212 17:43:01.707684 2305 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:43:01.707755 kubelet[2305]: I1212 17:43:01.707696 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:43:01.711278 kubelet[2305]: I1212 17:43:01.711258 2305 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:43:01.711351 kubelet[2305]: W1212 17:43:01.711295 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Dec 12 17:43:01.711374 kubelet[2305]: E1212 17:43:01.711353 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:43:01.711945 kubelet[2305]: W1212 17:43:01.711908 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Dec 12 17:43:01.711992 kubelet[2305]: E1212 17:43:01.711958 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:43:01.712399 kubelet[2305]: I1212 17:43:01.712365 2305 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:43:01.712511 kubelet[2305]: W1212 17:43:01.712499 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:43:01.713428 kubelet[2305]: I1212 17:43:01.713410 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:43:01.713501 kubelet[2305]: I1212 17:43:01.713445 2305 server.go:1287] "Started kubelet" Dec 12 17:43:01.713749 kubelet[2305]: I1212 17:43:01.713708 2305 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:43:01.714575 kubelet[2305]: I1212 17:43:01.714551 2305 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:43:01.718034 kubelet[2305]: I1212 17:43:01.717956 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:43:01.718359 kubelet[2305]: I1212 17:43:01.718339 2305 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:43:01.718543 kubelet[2305]: I1212 17:43:01.718521 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:43:01.720372 kubelet[2305]: I1212 17:43:01.718710 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:43:01.720372 kubelet[2305]: E1212 17:43:01.718991 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188088b8ea2d318f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:43:01.713424783 +0000 UTC m=+0.924935134,LastTimestamp:2025-12-12 17:43:01.713424783 +0000 UTC m=+0.924935134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:43:01.720372 kubelet[2305]: I1212 17:43:01.719970 2305 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:43:01.720372 kubelet[2305]: E1212 17:43:01.720036 2305 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Dec 12 17:43:01.720372 kubelet[2305]: I1212 17:43:01.720064 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:43:01.720372 kubelet[2305]: I1212 17:43:01.720096 2305 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:43:01.720606 kubelet[2305]: I1212 17:43:01.720540 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:43:01.721394 kubelet[2305]: E1212 17:43:01.721369 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Dec 12 17:43:01.721613 kubelet[2305]: W1212 17:43:01.721569 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Dec 12 17:43:01.721655 kubelet[2305]: E1212 17:43:01.721619 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:43:01.721743 kubelet[2305]: E1212 17:43:01.721721 2305 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:43:01.721873 kubelet[2305]: I1212 17:43:01.721829 2305 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:43:01.721873 kubelet[2305]: I1212 17:43:01.721839 2305 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:43:01.731858 kubelet[2305]: I1212 17:43:01.731818 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:43:01.734067 kubelet[2305]: I1212 17:43:01.734046 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:43:01.734301 kubelet[2305]: I1212 17:43:01.734287 2305 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:43:01.734405 kubelet[2305]: I1212 17:43:01.734385 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:43:01.734480 kubelet[2305]: I1212 17:43:01.734459 2305 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:43:01.734580 kubelet[2305]: E1212 17:43:01.734562 2305 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:43:01.734638 kubelet[2305]: I1212 17:43:01.734302 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:43:01.734680 kubelet[2305]: I1212 17:43:01.734672 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:43:01.734730 kubelet[2305]: I1212 17:43:01.734722 2305 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:43:01.734905 kubelet[2305]: W1212 17:43:01.734739 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Dec 12 17:43:01.734945 kubelet[2305]: E1212 17:43:01.734911 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:43:01.758685 kubelet[2305]: I1212 17:43:01.758655 2305 policy_none.go:49] "None policy: Start" Dec 12 17:43:01.758849 kubelet[2305]: I1212 17:43:01.758837 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:43:01.758900 kubelet[2305]: I1212 17:43:01.758892 2305 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:43:01.764402 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:43:01.781609 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:43:01.803218 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:43:01.804683 kubelet[2305]: I1212 17:43:01.804605 2305 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:43:01.805108 kubelet[2305]: I1212 17:43:01.804801 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:43:01.805108 kubelet[2305]: I1212 17:43:01.804816 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:43:01.805108 kubelet[2305]: I1212 17:43:01.805058 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:43:01.806132 kubelet[2305]: E1212 17:43:01.806060 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:43:01.806132 kubelet[2305]: E1212 17:43:01.806094 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:43:01.842729 systemd[1]: Created slice kubepods-burstable-podf2c4d812444250fb5a11670c1361141f.slice - libcontainer container kubepods-burstable-podf2c4d812444250fb5a11670c1361141f.slice. 
Dec 12 17:43:01.863486 kubelet[2305]: E1212 17:43:01.863370 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:01.865693 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 12 17:43:01.867435 kubelet[2305]: E1212 17:43:01.867390 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:01.869868 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 12 17:43:01.871578 kubelet[2305]: E1212 17:43:01.871552 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:01.906655 kubelet[2305]: I1212 17:43:01.906615 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:43:01.907140 kubelet[2305]: E1212 17:43:01.907083 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Dec 12 17:43:01.922940 kubelet[2305]: E1212 17:43:01.922819 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Dec 12 17:43:02.021213 kubelet[2305]: I1212 17:43:02.021158 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:02.021213 kubelet[2305]: I1212 17:43:02.021208 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:02.021375 kubelet[2305]: I1212 17:43:02.021232 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:02.021375 kubelet[2305]: I1212 17:43:02.021250 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:02.021375 kubelet[2305]: I1212 17:43:02.021305 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:02.021375 kubelet[2305]: I1212 17:43:02.021336 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:02.021452 kubelet[2305]: I1212 17:43:02.021382 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:43:02.021452 kubelet[2305]: I1212 17:43:02.021409 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:02.021452 kubelet[2305]: I1212 17:43:02.021436 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:02.108552 kubelet[2305]: I1212 17:43:02.108526 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:43:02.108895 kubelet[2305]: E1212 17:43:02.108855 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Dec 12 17:43:02.165304 containerd[1526]: time="2025-12-12T17:43:02.165248351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f2c4d812444250fb5a11670c1361141f,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:02.169137 containerd[1526]: time="2025-12-12T17:43:02.168926088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:02.173364 containerd[1526]: time="2025-12-12T17:43:02.173088952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:02.188689 containerd[1526]: time="2025-12-12T17:43:02.188645757Z" level=info msg="connecting to shim 3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909" address="unix:///run/containerd/s/62ae38ab94ee6e247ca67acdc93ea2457723dd4d6da31febdc2424721174e91d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:02.208013 containerd[1526]: time="2025-12-12T17:43:02.207958903Z" level=info msg="connecting to shim 4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f" address="unix:///run/containerd/s/0bc59344d6e2d3280b61cfbe4caa88e5ad11e83872b70b7fe8a22eb7ba6f065e" namespace=k8s.io protocol=ttrpc 
version=3 Dec 12 17:43:02.210648 containerd[1526]: time="2025-12-12T17:43:02.210612981Z" level=info msg="connecting to shim d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea" address="unix:///run/containerd/s/6ebb5aced2ffc6d0bf06eb68e8d21fad2959ebb04cc73f586b4492552e1d345b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:02.225659 systemd[1]: Started cri-containerd-3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909.scope - libcontainer container 3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909. Dec 12 17:43:02.231233 systemd[1]: Started cri-containerd-4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f.scope - libcontainer container 4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f. Dec 12 17:43:02.248643 systemd[1]: Started cri-containerd-d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea.scope - libcontainer container d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea. Dec 12 17:43:02.279095 containerd[1526]: time="2025-12-12T17:43:02.278960681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f2c4d812444250fb5a11670c1361141f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909\"" Dec 12 17:43:02.283127 containerd[1526]: time="2025-12-12T17:43:02.283092015Z" level=info msg="CreateContainer within sandbox \"3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:43:02.288287 containerd[1526]: time="2025-12-12T17:43:02.288247649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f\"" Dec 12 17:43:02.291651 containerd[1526]: time="2025-12-12T17:43:02.291609206Z" level=info msg="CreateContainer within sandbox \"4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:43:02.296339 containerd[1526]: time="2025-12-12T17:43:02.296225908Z" level=info msg="Container cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:02.302183 containerd[1526]: time="2025-12-12T17:43:02.302148034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea\"" Dec 12 17:43:02.305093 containerd[1526]: time="2025-12-12T17:43:02.305061282Z" level=info msg="CreateContainer within sandbox \"d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:43:02.309504 containerd[1526]: time="2025-12-12T17:43:02.308735653Z" level=info msg="Container 5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:02.323945 kubelet[2305]: E1212 17:43:02.323892 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Dec 12 17:43:02.338356 containerd[1526]: 
time="2025-12-12T17:43:02.338312593Z" level=info msg="CreateContainer within sandbox \"3c89ffe7e20835f0ae1ef86488ddcb5a9a6ec1f1246e5b7bd7ad9abbf9649909\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454\"" Dec 12 17:43:02.339123 containerd[1526]: time="2025-12-12T17:43:02.339094670Z" level=info msg="StartContainer for \"cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454\"" Dec 12 17:43:02.340285 containerd[1526]: time="2025-12-12T17:43:02.340261756Z" level=info msg="connecting to shim cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454" address="unix:///run/containerd/s/62ae38ab94ee6e247ca67acdc93ea2457723dd4d6da31febdc2424721174e91d" protocol=ttrpc version=3 Dec 12 17:43:02.362687 systemd[1]: Started cri-containerd-cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454.scope - libcontainer container cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454. Dec 12 17:43:02.449451 containerd[1526]: time="2025-12-12T17:43:02.449235831Z" level=info msg="CreateContainer within sandbox \"4884ee03ca211ba567886e8256fbb3707d67e4e70c43b4c8d89bcfcd6f53055f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6\"" Dec 12 17:43:02.449451 containerd[1526]: time="2025-12-12T17:43:02.449410027Z" level=info msg="StartContainer for \"cf9d5ff487f9beed8e3b73e2e623cfa12182a020cfe0ef4b1311fa31449c9454\" returns successfully" Dec 12 17:43:02.451130 containerd[1526]: time="2025-12-12T17:43:02.451102704Z" level=info msg="StartContainer for \"5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6\"" Dec 12 17:43:02.452269 containerd[1526]: time="2025-12-12T17:43:02.452218709Z" level=info msg="connecting to shim 5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6" address="unix:///run/containerd/s/0bc59344d6e2d3280b61cfbe4caa88e5ad11e83872b70b7fe8a22eb7ba6f065e" protocol=ttrpc version=3 Dec 12 17:43:02.475266 containerd[1526]: time="2025-12-12T17:43:02.475215281Z" level=info msg="Container 4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:02.478696 systemd[1]: Started cri-containerd-5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6.scope - libcontainer container 5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6. Dec 12 17:43:02.485762 containerd[1526]: time="2025-12-12T17:43:02.485702788Z" level=info msg="CreateContainer within sandbox \"d29053b41d2571d88142aa798c13eb8d86b44ead32c1c0accc98aba156156fea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361\"" Dec 12 17:43:02.486266 containerd[1526]: time="2025-12-12T17:43:02.486236913Z" level=info msg="StartContainer for \"4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361\"" Dec 12 17:43:02.487534 containerd[1526]: time="2025-12-12T17:43:02.487501473Z" level=info msg="connecting to shim 4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361" address="unix:///run/containerd/s/6ebb5aced2ffc6d0bf06eb68e8d21fad2959ebb04cc73f586b4492552e1d345b" protocol=ttrpc version=3 Dec 12 17:43:02.510638 systemd[1]: Started cri-containerd-4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361.scope - libcontainer container 4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361. 
Dec 12 17:43:02.511629 kubelet[2305]: I1212 17:43:02.511402 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:43:02.531508 containerd[1526]: time="2025-12-12T17:43:02.531448901Z" level=info msg="StartContainer for \"5d1d4f92a396470e54d60a5cb6fc027a92ae28f0dce26324833c76b7da9694b6\" returns successfully" Dec 12 17:43:02.554963 containerd[1526]: time="2025-12-12T17:43:02.554926834Z" level=info msg="StartContainer for \"4e10aa7f6483b34ea22bf974e99d82970c5eeaaee774df75d3b163dd1b822361\" returns successfully" Dec 12 17:43:02.740562 kubelet[2305]: E1212 17:43:02.740208 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:02.741523 kubelet[2305]: E1212 17:43:02.741502 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:02.744503 kubelet[2305]: E1212 17:43:02.744405 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:03.748494 kubelet[2305]: E1212 17:43:03.748189 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:03.748889 kubelet[2305]: E1212 17:43:03.748860 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:03.750391 kubelet[2305]: E1212 17:43:03.750353 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:43:04.150138 kubelet[2305]: E1212 17:43:04.150091 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:43:04.245301 kubelet[2305]: I1212 17:43:04.245254 2305 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:43:04.269937 kubelet[2305]: E1212 17:43:04.269807 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188088b8ea2d318f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:43:01.713424783 +0000 UTC m=+0.924935134,LastTimestamp:2025-12-12 17:43:01.713424783 +0000 UTC m=+0.924935134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:43:04.321501 kubelet[2305]: I1212 17:43:04.320819 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:04.332052 kubelet[2305]: E1212 17:43:04.331928 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188088b8eaabaafa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:43:01.721713402 +0000 UTC m=+0.933223752,LastTimestamp:2025-12-12 17:43:01.721713402 +0000 UTC m=+0.933223752,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:43:04.334252 kubelet[2305]: E1212 17:43:04.334203 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:04.334252 kubelet[2305]: I1212 17:43:04.334242 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:04.336946 kubelet[2305]: E1212 17:43:04.336897 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:04.336946 kubelet[2305]: I1212 17:43:04.336927 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:43:04.340540 kubelet[2305]: E1212 17:43:04.340503 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:43:04.710130 kubelet[2305]: I1212 17:43:04.710080 2305 apiserver.go:52] "Watching apiserver" Dec 12 17:43:04.720679 kubelet[2305]: I1212 17:43:04.720641 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:43:06.238503 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit session-7.scope)... Dec 12 17:43:06.238518 systemd[1]: Reloading... Dec 12 17:43:06.303529 zram_generator::config[2624]: No configuration found. Dec 12 17:43:06.471679 systemd[1]: Reloading finished in 232 ms. Dec 12 17:43:06.496976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:06.510336 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:43:06.510602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:43:06.510661 systemd[1]: kubelet.service: Consumed 1.301s CPU time, 131M memory peak. Dec 12 17:43:06.512393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:43:06.671961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:43:06.681819 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:43:06.722349 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:43:06.722349 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 12 17:43:06.722349 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:43:06.722833 kubelet[2663]: I1212 17:43:06.722428 2663 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:43:06.730061 kubelet[2663]: I1212 17:43:06.729981 2663 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:43:06.730061 kubelet[2663]: I1212 17:43:06.730054 2663 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:43:06.730354 kubelet[2663]: I1212 17:43:06.730335 2663 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:43:06.731677 kubelet[2663]: I1212 17:43:06.731647 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:43:06.734053 kubelet[2663]: I1212 17:43:06.734019 2663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:43:06.737714 kubelet[2663]: I1212 17:43:06.737691 2663 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:43:06.742253 kubelet[2663]: I1212 17:43:06.742212 2663 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:43:06.742460 kubelet[2663]: I1212 17:43:06.742413 2663 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:43:06.742662 kubelet[2663]: I1212 17:43:06.742440 2663 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:43:06.742662 kubelet[2663]: I1212 17:43:06.742649 2663 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:43:06.742662 
kubelet[2663]: I1212 17:43:06.742659 2663 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:43:06.742800 kubelet[2663]: I1212 17:43:06.742697 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:43:06.742850 kubelet[2663]: I1212 17:43:06.742836 2663 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:43:06.742850 kubelet[2663]: I1212 17:43:06.742850 2663 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:43:06.742905 kubelet[2663]: I1212 17:43:06.742870 2663 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:43:06.742905 kubelet[2663]: I1212 17:43:06.742883 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:43:06.743777 kubelet[2663]: I1212 17:43:06.743700 2663 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:43:06.745804 kubelet[2663]: I1212 17:43:06.745782 2663 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:43:06.746215 kubelet[2663]: I1212 17:43:06.746198 2663 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:43:06.746259 kubelet[2663]: I1212 17:43:06.746236 2663 server.go:1287] "Started kubelet" Dec 12 17:43:06.747597 kubelet[2663]: I1212 17:43:06.747527 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:43:06.747597 kubelet[2663]: I1212 17:43:06.747764 2663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:43:06.748310 kubelet[2663]: I1212 17:43:06.748163 2663 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:43:06.753919 kubelet[2663]: I1212 17:43:06.753888 2663 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:43:06.754115 kubelet[2663]: E1212 17:43:06.754094 2663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:43:06.754418 kubelet[2663]: I1212 17:43:06.754395 2663 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:43:06.755323 kubelet[2663]: I1212 17:43:06.755266 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:43:06.756941 kubelet[2663]: I1212 17:43:06.755593 2663 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:43:06.756941 kubelet[2663]: I1212 17:43:06.755461 2663 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:43:06.756941 kubelet[2663]: I1212 17:43:06.755716 2663 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:43:06.756941 kubelet[2663]: I1212 17:43:06.756638 2663 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:43:06.756941 kubelet[2663]: I1212 17:43:06.756746 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:43:06.761669 kubelet[2663]: I1212 17:43:06.761636 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:43:06.763359 kubelet[2663]: I1212 17:43:06.763327 2663 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:43:06.763359 kubelet[2663]: I1212 17:43:06.763355 2663 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:43:06.763448 kubelet[2663]: I1212 17:43:06.763373 2663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:43:06.763448 kubelet[2663]: I1212 17:43:06.763381 2663 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:43:06.763448 kubelet[2663]: E1212 17:43:06.763417 2663 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:43:06.771022 kubelet[2663]: E1212 17:43:06.770767 2663 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:43:06.776955 kubelet[2663]: I1212 17:43:06.776926 2663 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:43:06.807788 kubelet[2663]: I1212 17:43:06.807763 2663 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:43:06.807788 kubelet[2663]: I1212 17:43:06.807781 2663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:43:06.807927 kubelet[2663]: I1212 17:43:06.807803 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:43:06.807976 kubelet[2663]: I1212 17:43:06.807959 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:43:06.808000 kubelet[2663]: I1212 17:43:06.807974 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:43:06.808000 kubelet[2663]: I1212 17:43:06.807992 2663 policy_none.go:49] "None policy: Start" Dec 12 17:43:06.808000 kubelet[2663]: I1212 17:43:06.808000 2663 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:43:06.808066 kubelet[2663]: I1212 17:43:06.808009 2663 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:43:06.808120 kubelet[2663]: I1212 17:43:06.808109 2663 state_mem.go:75] "Updated machine memory state" Dec 12 17:43:06.811877 kubelet[2663]: I1212 17:43:06.811804 2663 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:43:06.811970 kubelet[2663]: I1212 17:43:06.811951 2663 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:43:06.812002 kubelet[2663]: I1212 17:43:06.811968 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:43:06.812191 kubelet[2663]: I1212 17:43:06.812169 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:43:06.813593 kubelet[2663]: E1212 17:43:06.813560 2663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:43:06.864549 kubelet[2663]: I1212 17:43:06.864507 2663 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:06.864671 kubelet[2663]: I1212 17:43:06.864554 2663 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:43:06.864780 kubelet[2663]: I1212 17:43:06.864757 2663 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.914333 kubelet[2663]: I1212 17:43:06.914249 2663 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:43:06.921282 kubelet[2663]: I1212 17:43:06.921158 2663 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:43:06.921282 kubelet[2663]: I1212 17:43:06.921267 2663 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:43:06.956544 kubelet[2663]: I1212 17:43:06.956448 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:43:06.956834 kubelet[2663]: I1212 17:43:06.956707 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:06.956834 kubelet[2663]: I1212 17:43:06.956740 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:06.956834 kubelet[2663]: I1212 17:43:06.956792 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.956834 kubelet[2663]: I1212 17:43:06.956809 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.957039 kubelet[2663]: I1212 17:43:06.956824 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.957039 kubelet[2663]: I1212 17:43:06.956982 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.957039 kubelet[2663]: I1212 17:43:06.956999 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:43:06.957039 kubelet[2663]: I1212 17:43:06.957013 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c4d812444250fb5a11670c1361141f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c4d812444250fb5a11670c1361141f\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:43:07.239314 sudo[2702]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 12 17:43:07.239606 sudo[2702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 12 17:43:07.561432 sudo[2702]: pam_unix(sudo:session): session closed for user root Dec 12 17:43:07.743987 kubelet[2663]: I1212 17:43:07.743951 2663 apiserver.go:52] "Watching apiserver" Dec 12 17:43:07.756329 kubelet[2663]: I1212 17:43:07.756291 2663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:43:07.806085 kubelet[2663]: I1212 17:43:07.806016 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.805998501 podStartE2EDuration="1.805998501s" podCreationTimestamp="2025-12-12 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:07.805818355 +0000 UTC m=+1.120721452" watchObservedRunningTime="2025-12-12 17:43:07.805998501 +0000 UTC m=+1.120901598" Dec 12 17:43:07.813989 kubelet[2663]: I1212 17:43:07.813822 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8138072809999999 podStartE2EDuration="1.813807281s" podCreationTimestamp="2025-12-12 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:07.813490063 +0000 UTC m=+1.128393200" watchObservedRunningTime="2025-12-12 17:43:07.813807281 +0000 UTC m=+1.128710418" Dec 12 17:43:07.821369 kubelet[2663]: I1212 17:43:07.820831 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.820820854 podStartE2EDuration="1.820820854s" podCreationTimestamp="2025-12-12 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:07.820600796 +0000 UTC m=+1.135503893" watchObservedRunningTime="2025-12-12 17:43:07.820820854 +0000 UTC m=+1.135723951" Dec 12 17:43:09.211550 sudo[1739]: pam_unix(sudo:session): session closed for user root Dec 12 17:43:09.212892 sshd[1738]: Connection closed by 10.0.0.1 port 56572 Dec 12 17:43:09.213846 sshd-session[1735]: pam_unix(sshd:session): session closed 
for user core Dec 12 17:43:09.217967 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:56572.service: Deactivated successfully. Dec 12 17:43:09.220356 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:43:09.220731 systemd[1]: session-7.scope: Consumed 6.285s CPU time, 264.1M memory peak. Dec 12 17:43:09.223456 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:43:09.224522 systemd-logind[1510]: Removed session 7. Dec 12 17:43:11.043607 kubelet[2663]: I1212 17:43:11.043567 2663 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:43:11.046493 containerd[1526]: time="2025-12-12T17:43:11.044423754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:43:11.046829 kubelet[2663]: I1212 17:43:11.044701 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:43:12.057761 systemd[1]: Created slice kubepods-besteffort-pod6976fdda_166f_4731_9eb4_94328b9713bb.slice - libcontainer container kubepods-besteffort-pod6976fdda_166f_4731_9eb4_94328b9713bb.slice. Dec 12 17:43:12.071200 systemd[1]: Created slice kubepods-burstable-pod570756bd_fcd7_432f_a194_78279a547fff.slice - libcontainer container kubepods-burstable-pod570756bd_fcd7_432f_a194_78279a547fff.slice. Dec 12 17:43:12.087736 kubelet[2663]: I1212 17:43:12.087682 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-run\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087768 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-cgroup\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087786 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-hostproc\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087833 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-etc-cni-netd\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087856 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-xtables-lock\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087872 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/570756bd-fcd7-432f-a194-78279a547fff-cilium-config-path\") pod \"cilium-wptmn\" (UID: 
\"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088098 kubelet[2663]: I1212 17:43:12.087981 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lp7w\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-kube-api-access-4lp7w\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088246 kubelet[2663]: I1212 17:43:12.088009 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6976fdda-166f-4731-9eb4-94328b9713bb-kube-proxy\") pod \"kube-proxy-d6p6n\" (UID: \"6976fdda-166f-4731-9eb4-94328b9713bb\") " pod="kube-system/kube-proxy-d6p6n" Dec 12 17:43:12.088246 kubelet[2663]: I1212 17:43:12.088027 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6976fdda-166f-4731-9eb4-94328b9713bb-xtables-lock\") pod \"kube-proxy-d6p6n\" (UID: \"6976fdda-166f-4731-9eb4-94328b9713bb\") " pod="kube-system/kube-proxy-d6p6n" Dec 12 17:43:12.088246 kubelet[2663]: I1212 17:43:12.088054 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mdg\" (UniqueName: \"kubernetes.io/projected/6976fdda-166f-4731-9eb4-94328b9713bb-kube-api-access-x4mdg\") pod \"kube-proxy-d6p6n\" (UID: \"6976fdda-166f-4731-9eb4-94328b9713bb\") " pod="kube-system/kube-proxy-d6p6n" Dec 12 17:43:12.088246 kubelet[2663]: I1212 17:43:12.088096 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-lib-modules\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088246 kubelet[2663]: I1212 17:43:12.088124 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-kernel\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088142 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-net\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088170 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6976fdda-166f-4731-9eb4-94328b9713bb-lib-modules\") pod \"kube-proxy-d6p6n\" (UID: \"6976fdda-166f-4731-9eb4-94328b9713bb\") " pod="kube-system/kube-proxy-d6p6n" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088216 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-bpf-maps\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088251 
2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cni-path\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088268 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/570756bd-fcd7-432f-a194-78279a547fff-clustermesh-secrets\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.088347 kubelet[2663]: I1212 17:43:12.088286 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-hubble-tls\") pod \"cilium-wptmn\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " pod="kube-system/cilium-wptmn" Dec 12 17:43:12.158090 systemd[1]: Created slice kubepods-besteffort-pod4e797a90_7add_4a3d_a8d7_34cf16809b8f.slice - libcontainer container kubepods-besteffort-pod4e797a90_7add_4a3d_a8d7_34cf16809b8f.slice. Dec 12 17:43:12.189320 kubelet[2663]: I1212 17:43:12.189274 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8bs\" (UniqueName: \"kubernetes.io/projected/4e797a90-7add-4a3d-a8d7-34cf16809b8f-kube-api-access-9l8bs\") pod \"cilium-operator-6c4d7847fc-9lmgt\" (UID: \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\") " pod="kube-system/cilium-operator-6c4d7847fc-9lmgt" Dec 12 17:43:12.190694 kubelet[2663]: I1212 17:43:12.190644 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e797a90-7add-4a3d-a8d7-34cf16809b8f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9lmgt\" (UID: \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\") " pod="kube-system/cilium-operator-6c4d7847fc-9lmgt" Dec 12 17:43:12.367807 containerd[1526]: time="2025-12-12T17:43:12.367127862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d6p6n,Uid:6976fdda-166f-4731-9eb4-94328b9713bb,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:12.376451 containerd[1526]: time="2025-12-12T17:43:12.376401494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wptmn,Uid:570756bd-fcd7-432f-a194-78279a547fff,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:12.434051 containerd[1526]: time="2025-12-12T17:43:12.434007264Z" level=info msg="connecting to shim 3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:12.434690 containerd[1526]: time="2025-12-12T17:43:12.434636907Z" level=info msg="connecting to shim c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8" address="unix:///run/containerd/s/48f855adb16272662e958e2f7c5636e2fb24834754459baf61de796fea6f1596" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:12.458671 systemd[1]: Started cri-containerd-3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25.scope - libcontainer container 3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25. 
Dec 12 17:43:12.462989 systemd[1]: Started cri-containerd-c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8.scope - libcontainer container c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8. Dec 12 17:43:12.463242 containerd[1526]: time="2025-12-12T17:43:12.462840751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9lmgt,Uid:4e797a90-7add-4a3d-a8d7-34cf16809b8f,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:12.512380 containerd[1526]: time="2025-12-12T17:43:12.512276304Z" level=info msg="connecting to shim 328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a" address="unix:///run/containerd/s/c992c4c61e6fb8ee31e1e61ca59d59b860270c8cfaf87aa4e6d835fca27f7637" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:12.518461 containerd[1526]: time="2025-12-12T17:43:12.518406449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wptmn,Uid:570756bd-fcd7-432f-a194-78279a547fff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\"" Dec 12 17:43:12.520676 containerd[1526]: time="2025-12-12T17:43:12.520642059Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 12 17:43:12.532443 containerd[1526]: time="2025-12-12T17:43:12.532403958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d6p6n,Uid:6976fdda-166f-4731-9eb4-94328b9713bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8\"" Dec 12 17:43:12.535439 containerd[1526]: time="2025-12-12T17:43:12.535385612Z" level=info msg="CreateContainer within sandbox \"c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:43:12.544706 systemd[1]: Started cri-containerd-328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a.scope - libcontainer container 328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a. Dec 12 17:43:12.547071 containerd[1526]: time="2025-12-12T17:43:12.546983516Z" level=info msg="Container 77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:12.556021 containerd[1526]: time="2025-12-12T17:43:12.555976430Z" level=info msg="CreateContainer within sandbox \"c21bffbe9682eced403272a787ca7a6b0f907bcaf4bef741431bf4da6ef0d2a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0\"" Dec 12 17:43:12.557839 containerd[1526]: time="2025-12-12T17:43:12.557796188Z" level=info msg="StartContainer for \"77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0\"" Dec 12 17:43:12.560059 containerd[1526]: time="2025-12-12T17:43:12.560017028Z" level=info msg="connecting to shim 77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0" address="unix:///run/containerd/s/48f855adb16272662e958e2f7c5636e2fb24834754459baf61de796fea6f1596" protocol=ttrpc version=3 Dec 12 17:43:12.583701 systemd[1]: Started cri-containerd-77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0.scope - libcontainer container 77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0. 
Dec 12 17:43:12.589090 containerd[1526]: time="2025-12-12T17:43:12.588984248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9lmgt,Uid:4e797a90-7add-4a3d-a8d7-34cf16809b8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\"" Dec 12 17:43:12.682461 containerd[1526]: time="2025-12-12T17:43:12.682348088Z" level=info msg="StartContainer for \"77b027561f8229136ac06083958ecd09edddf6dbef09f609065f65ef0d6593e0\" returns successfully" Dec 12 17:43:16.498042 kubelet[2663]: I1212 17:43:16.497853 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d6p6n" podStartSLOduration=4.495927908 podStartE2EDuration="4.495927908s" podCreationTimestamp="2025-12-12 17:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:12.813330623 +0000 UTC m=+6.128233760" watchObservedRunningTime="2025-12-12 17:43:16.495927908 +0000 UTC m=+9.810831045" Dec 12 17:43:18.844128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069710807.mount: Deactivated successfully. Dec 12 17:43:20.167409 containerd[1526]: time="2025-12-12T17:43:20.167329708Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:43:20.168071 containerd[1526]: time="2025-12-12T17:43:20.168031907Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 12 17:43:20.169006 containerd[1526]: time="2025-12-12T17:43:20.168963851Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:43:20.170963 containerd[1526]: time="2025-12-12T17:43:20.170738937Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.649921958s" Dec 12 17:43:20.170963 containerd[1526]: time="2025-12-12T17:43:20.170778155Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 12 17:43:20.181773 containerd[1526]: time="2025-12-12T17:43:20.181718886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 12 17:43:20.189224 containerd[1526]: time="2025-12-12T17:43:20.189178236Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:43:20.203858 containerd[1526]: time="2025-12-12T17:43:20.203173555Z" level=info msg="Container 6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:20.208752 containerd[1526]: time="2025-12-12T17:43:20.208706789Z" 
level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\"" Dec 12 17:43:20.209381 containerd[1526]: time="2025-12-12T17:43:20.209353563Z" level=info msg="StartContainer for \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\"" Dec 12 17:43:20.210715 containerd[1526]: time="2025-12-12T17:43:20.210679326Z" level=info msg="connecting to shim 6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" protocol=ttrpc version=3 Dec 12 17:43:20.254746 systemd[1]: Started cri-containerd-6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a.scope - libcontainer container 6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a. Dec 12 17:43:20.279995 containerd[1526]: time="2025-12-12T17:43:20.279930552Z" level=info msg="StartContainer for \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" returns successfully" Dec 12 17:43:20.295120 systemd[1]: cri-containerd-6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a.scope: Deactivated successfully. Dec 12 17:43:20.331854 containerd[1526]: time="2025-12-12T17:43:20.331789516Z" level=info msg="received container exit event container_id:\"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" id:\"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" pid:3091 exited_at:{seconds:1765561400 nanos:322302846}" Dec 12 17:43:20.373083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a-rootfs.mount: Deactivated successfully. Dec 12 17:43:20.824486 containerd[1526]: time="2025-12-12T17:43:20.824411156Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:43:20.831717 containerd[1526]: time="2025-12-12T17:43:20.831664772Z" level=info msg="Container 3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:20.837561 containerd[1526]: time="2025-12-12T17:43:20.837509388Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\"" Dec 12 17:43:20.838149 containerd[1526]: time="2025-12-12T17:43:20.838111141Z" level=info msg="StartContainer for \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\"" Dec 12 17:43:20.839023 containerd[1526]: time="2025-12-12T17:43:20.838981817Z" level=info msg="connecting to shim 3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" protocol=ttrpc version=3 Dec 12 17:43:20.872000 systemd[1]: Started cri-containerd-3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53.scope - libcontainer container 3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53. 
Dec 12 17:43:20.917407 containerd[1526]: time="2025-12-12T17:43:20.917278433Z" level=info msg="StartContainer for \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" returns successfully" Dec 12 17:43:20.930673 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:43:20.930917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:43:20.931339 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:43:20.932979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:43:20.936113 systemd[1]: cri-containerd-3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53.scope: Deactivated successfully. Dec 12 17:43:20.937379 containerd[1526]: time="2025-12-12T17:43:20.937330945Z" level=info msg="received container exit event container_id:\"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" id:\"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" pid:3135 exited_at:{seconds:1765561400 nanos:937103562}" Dec 12 17:43:20.980021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:43:21.414720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862971356.mount: Deactivated successfully. Dec 12 17:43:21.828684 containerd[1526]: time="2025-12-12T17:43:21.828639077Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 17:43:21.844082 containerd[1526]: time="2025-12-12T17:43:21.844034280Z" level=info msg="Container da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:21.846627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387179503.mount: Deactivated successfully. Dec 12 17:43:21.861838 containerd[1526]: time="2025-12-12T17:43:21.861789381Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\"" Dec 12 17:43:21.862405 containerd[1526]: time="2025-12-12T17:43:21.862378155Z" level=info msg="StartContainer for \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\"" Dec 12 17:43:21.864077 containerd[1526]: time="2025-12-12T17:43:21.864000254Z" level=info msg="connecting to shim da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" protocol=ttrpc version=3 Dec 12 17:43:21.894707 systemd[1]: Started cri-containerd-da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5.scope - libcontainer container da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5. Dec 12 17:43:21.997446 containerd[1526]: time="2025-12-12T17:43:21.997366797Z" level=info msg="StartContainer for \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" returns successfully" Dec 12 17:43:22.001347 systemd[1]: cri-containerd-da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5.scope: Deactivated successfully. Dec 12 17:43:22.001846 systemd[1]: cri-containerd-da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5.scope: Consumed 39ms CPU time, 5.3M memory peak, 2M read from disk. 
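Each "received container exit event" above carries the exit time as a protobuf-style pair of whole seconds and nanoseconds (for the apply-sysctl-overwrites container: seconds:1765561400 nanos:937103562). A small sketch, assuming only the Go standard library, converting such a pair back into the wall-clock time the journal shows:

package main

import (
    "fmt"
    "time"
)

func main() {
    // exited_at fields copied from the log entry above.
    const seconds, nanos = 1765561400, 937103562
    exitedAt := time.Unix(seconds, nanos).UTC()
    fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-12-12T17:43:20.937103562Z
}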
Dec 12 17:43:22.010775 containerd[1526]: time="2025-12-12T17:43:22.010726922Z" level=info msg="received container exit event container_id:\"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" id:\"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" pid:3197 exited_at:{seconds:1765561402 nanos:10309151}" Dec 12 17:43:22.189945 containerd[1526]: time="2025-12-12T17:43:22.189549194Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:43:22.190767 containerd[1526]: time="2025-12-12T17:43:22.190545563Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 12 17:43:22.191683 containerd[1526]: time="2025-12-12T17:43:22.191655097Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:43:22.193771 containerd[1526]: time="2025-12-12T17:43:22.193727787Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.011965721s" Dec 12 17:43:22.193919 containerd[1526]: time="2025-12-12T17:43:22.193900618Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 12 17:43:22.197316 containerd[1526]: time="2025-12-12T17:43:22.197275642Z" level=info msg="CreateContainer within sandbox \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 12 17:43:22.208177 containerd[1526]: time="2025-12-12T17:43:22.207460457Z" level=info msg="Container d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:22.212050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538598158.mount: Deactivated successfully. 
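The operator-generic pull above reports 17135306 bytes read and an elapsed time of 2.011965721s. A rough, illustrative estimate of the effective transfer rate from those two figures (the elapsed time also covers unpacking, so this is only an approximation):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Figures taken from the "stop pulling" and "Pulled image" entries above.
    const bytesRead = 17135306
    elapsed, err := time.ParseDuration("2.011965721s")
    if err != nil {
        panic(err)
    }
    rate := float64(bytesRead) / elapsed.Seconds() / (1 << 20)
    fmt.Printf("effective pull rate: about %.1f MiB/s\n", rate)
}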
Dec 12 17:43:22.214393 containerd[1526]: time="2025-12-12T17:43:22.214250081Z" level=info msg="CreateContainer within sandbox \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\"" Dec 12 17:43:22.214942 containerd[1526]: time="2025-12-12T17:43:22.214900627Z" level=info msg="StartContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\"" Dec 12 17:43:22.215962 containerd[1526]: time="2025-12-12T17:43:22.215929769Z" level=info msg="connecting to shim d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb" address="unix:///run/containerd/s/c992c4c61e6fb8ee31e1e61ca59d59b860270c8cfaf87aa4e6d835fca27f7637" protocol=ttrpc version=3 Dec 12 17:43:22.238686 systemd[1]: Started cri-containerd-d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb.scope - libcontainer container d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb. Dec 12 17:43:22.274181 containerd[1526]: time="2025-12-12T17:43:22.274068845Z" level=info msg="StartContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" returns successfully" Dec 12 17:43:22.443577 update_engine[1514]: I20251212 17:43:22.442509 1514 update_attempter.cc:509] Updating boot flags... Dec 12 17:43:22.837630 containerd[1526]: time="2025-12-12T17:43:22.837585910Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 17:43:22.852937 containerd[1526]: time="2025-12-12T17:43:22.852893025Z" level=info msg="Container 2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:22.868320 containerd[1526]: time="2025-12-12T17:43:22.868265607Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\"" Dec 12 17:43:22.868798 containerd[1526]: time="2025-12-12T17:43:22.868772975Z" level=info msg="StartContainer for \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\"" Dec 12 17:43:22.872720 containerd[1526]: time="2025-12-12T17:43:22.869902518Z" level=info msg="connecting to shim 2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" protocol=ttrpc version=3 Dec 12 17:43:22.905640 systemd[1]: Started cri-containerd-2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d.scope - libcontainer container 2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d. Dec 12 17:43:22.934422 systemd[1]: cri-containerd-2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d.scope: Deactivated successfully. 
Dec 12 17:43:22.936812 containerd[1526]: time="2025-12-12T17:43:22.936665329Z" level=info msg="received container exit event container_id:\"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" id:\"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" pid:3289 exited_at:{seconds:1765561402 nanos:935569240}" Dec 12 17:43:22.948372 containerd[1526]: time="2025-12-12T17:43:22.948305701Z" level=info msg="StartContainer for \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" returns successfully" Dec 12 17:43:23.854437 containerd[1526]: time="2025-12-12T17:43:23.853798132Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 17:43:23.869633 kubelet[2663]: I1212 17:43:23.867886 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9lmgt" podStartSLOduration=2.263666727 podStartE2EDuration="11.867867497s" podCreationTimestamp="2025-12-12 17:43:12 +0000 UTC" firstStartedPulling="2025-12-12 17:43:12.590443953 +0000 UTC m=+5.905347090" lastFinishedPulling="2025-12-12 17:43:22.194644723 +0000 UTC m=+15.509547860" observedRunningTime="2025-12-12 17:43:22.898125129 +0000 UTC m=+16.213028266" watchObservedRunningTime="2025-12-12 17:43:23.867867497 +0000 UTC m=+17.182770634" Dec 12 17:43:23.873892 containerd[1526]: time="2025-12-12T17:43:23.872953479Z" level=info msg="Container 7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:23.883204 containerd[1526]: time="2025-12-12T17:43:23.883136449Z" level=info msg="CreateContainer within sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\"" Dec 12 17:43:23.888068 containerd[1526]: time="2025-12-12T17:43:23.888031717Z" level=info msg="StartContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\"" Dec 12 17:43:23.890525 containerd[1526]: time="2025-12-12T17:43:23.890442617Z" level=info msg="connecting to shim 7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da" address="unix:///run/containerd/s/1c9a2aa1abdc2b54c1831421f55378f10afc6a96e921ee02cedb2fedb66eebc4" protocol=ttrpc version=3 Dec 12 17:43:23.923657 systemd[1]: Started cri-containerd-7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da.scope - libcontainer container 7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da. 
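The kubelet pod_startup_latency_tracker lines report two derived figures, and the values logged for the cilium-operator pod above are consistent with this arithmetic: podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration is that gap minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal sketch recomputing both from the timestamps printed in the entry, assuming only the Go standard library:

package main

import (
    "fmt"
    "time"
)

// parse handles kubelet's default Go time formatting, e.g. "2025-12-12 17:43:12 +0000 UTC";
// Go accepts the fractional seconds in the input even though the layout omits them.
func parse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := parse("2025-12-12 17:43:12 +0000 UTC")
    firstPull := parse("2025-12-12 17:43:12.590443953 +0000 UTC")
    lastPull := parse("2025-12-12 17:43:22.194644723 +0000 UTC")
    watchObserved := parse("2025-12-12 17:43:23.867867497 +0000 UTC")

    e2e := watchObserved.Sub(created)
    slo := e2e - lastPull.Sub(firstPull)
    fmt.Println("podStartE2EDuration:", e2e) // 11.867867497s
    fmt.Println("podStartSLOduration:", slo) // 2.263666727s
}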
Dec 12 17:43:23.991690 containerd[1526]: time="2025-12-12T17:43:23.991635144Z" level=info msg="StartContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" returns successfully" Dec 12 17:43:24.125781 kubelet[2663]: I1212 17:43:24.125653 2663 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:43:24.170122 kubelet[2663]: I1212 17:43:24.170060 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sqw7\" (UniqueName: \"kubernetes.io/projected/acc63176-1956-4c94-ae39-8fdd569723de-kube-api-access-5sqw7\") pod \"coredns-668d6bf9bc-9qwvx\" (UID: \"acc63176-1956-4c94-ae39-8fdd569723de\") " pod="kube-system/coredns-668d6bf9bc-9qwvx" Dec 12 17:43:24.170122 kubelet[2663]: I1212 17:43:24.170119 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acc63176-1956-4c94-ae39-8fdd569723de-config-volume\") pod \"coredns-668d6bf9bc-9qwvx\" (UID: \"acc63176-1956-4c94-ae39-8fdd569723de\") " pod="kube-system/coredns-668d6bf9bc-9qwvx" Dec 12 17:43:24.172754 systemd[1]: Created slice kubepods-burstable-podacc63176_1956_4c94_ae39_8fdd569723de.slice - libcontainer container kubepods-burstable-podacc63176_1956_4c94_ae39_8fdd569723de.slice. Dec 12 17:43:24.195725 systemd[1]: Created slice kubepods-burstable-pod83a4e053_a71b_4157_a776_40b5f3d033fd.slice - libcontainer container kubepods-burstable-pod83a4e053_a71b_4157_a776_40b5f3d033fd.slice. Dec 12 17:43:24.270916 kubelet[2663]: I1212 17:43:24.270858 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83a4e053-a71b-4157-a776-40b5f3d033fd-config-volume\") pod \"coredns-668d6bf9bc-58fvb\" (UID: \"83a4e053-a71b-4157-a776-40b5f3d033fd\") " pod="kube-system/coredns-668d6bf9bc-58fvb" Dec 12 17:43:24.271059 kubelet[2663]: I1212 17:43:24.270936 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gv46\" (UniqueName: \"kubernetes.io/projected/83a4e053-a71b-4157-a776-40b5f3d033fd-kube-api-access-9gv46\") pod \"coredns-668d6bf9bc-58fvb\" (UID: \"83a4e053-a71b-4157-a776-40b5f3d033fd\") " pod="kube-system/coredns-668d6bf9bc-58fvb" Dec 12 17:43:24.478774 containerd[1526]: time="2025-12-12T17:43:24.478633492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qwvx,Uid:acc63176-1956-4c94-ae39-8fdd569723de,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:24.508919 containerd[1526]: time="2025-12-12T17:43:24.508544307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58fvb,Uid:83a4e053-a71b-4157-a776-40b5f3d033fd,Namespace:kube-system,Attempt:0,}" Dec 12 17:43:24.888499 kubelet[2663]: I1212 17:43:24.887260 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wptmn" podStartSLOduration=5.225610831 podStartE2EDuration="12.887241537s" podCreationTimestamp="2025-12-12 17:43:12 +0000 UTC" firstStartedPulling="2025-12-12 17:43:12.519885928 +0000 UTC m=+5.834789065" lastFinishedPulling="2025-12-12 17:43:20.181516674 +0000 UTC m=+13.496419771" observedRunningTime="2025-12-12 17:43:24.887235855 +0000 UTC m=+18.202138992" watchObservedRunningTime="2025-12-12 17:43:24.887241537 +0000 UTC m=+18.202144674" Dec 12 17:43:26.104591 systemd-networkd[1434]: cilium_host: Link UP Dec 12 17:43:26.104974 
systemd-networkd[1434]: cilium_net: Link UP Dec 12 17:43:26.105111 systemd-networkd[1434]: cilium_net: Gained carrier Dec 12 17:43:26.105243 systemd-networkd[1434]: cilium_host: Gained carrier Dec 12 17:43:26.188023 systemd-networkd[1434]: cilium_vxlan: Link UP Dec 12 17:43:26.188030 systemd-networkd[1434]: cilium_vxlan: Gained carrier Dec 12 17:43:26.314311 systemd-networkd[1434]: cilium_host: Gained IPv6LL Dec 12 17:43:26.453513 kernel: NET: Registered PF_ALG protocol family Dec 12 17:43:27.076366 systemd-networkd[1434]: lxc_health: Link UP Dec 12 17:43:27.076632 systemd-networkd[1434]: lxc_health: Gained carrier Dec 12 17:43:27.099060 systemd-networkd[1434]: cilium_net: Gained IPv6LL Dec 12 17:43:27.559490 kernel: eth0: renamed from tmp993da Dec 12 17:43:27.573001 kernel: eth0: renamed from tmpda763 Dec 12 17:43:27.572093 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL Dec 12 17:43:27.572259 systemd-networkd[1434]: lxc21685200e65f: Link UP Dec 12 17:43:27.572488 systemd-networkd[1434]: lxc39acff729f08: Link UP Dec 12 17:43:27.572746 systemd-networkd[1434]: lxc21685200e65f: Gained carrier Dec 12 17:43:27.574897 systemd-networkd[1434]: lxc39acff729f08: Gained carrier Dec 12 17:43:29.081624 systemd-networkd[1434]: lxc_health: Gained IPv6LL Dec 12 17:43:29.273641 systemd-networkd[1434]: lxc21685200e65f: Gained IPv6LL Dec 12 17:43:29.657623 systemd-networkd[1434]: lxc39acff729f08: Gained IPv6LL Dec 12 17:43:31.244429 containerd[1526]: time="2025-12-12T17:43:31.244175982Z" level=info msg="connecting to shim da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829" address="unix:///run/containerd/s/8af107ec0618b8122120858e252cd23b266bcbf8ba26d2dce4fb992f0ea5a707" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:31.244814 containerd[1526]: time="2025-12-12T17:43:31.244487345Z" level=info msg="connecting to shim 993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be" address="unix:///run/containerd/s/90be2eb6474feba591d6a50653d2743d0751388f66a5ab3a0200c24a3605921d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:43:31.269659 systemd[1]: Started cri-containerd-da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829.scope - libcontainer container da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829. Dec 12 17:43:31.273144 systemd[1]: Started cri-containerd-993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be.scope - libcontainer container 993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be. 
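The systemd-networkd entries above track the Cilium datapath coming up: cilium_host, cilium_net and cilium_vxlan appear first, then lxc_health and one lxc* veth per pod gain carrier and IPv6 link-local addresses. As a hypothetical spot-check one could run on such a node (illustrative only, standard library only), listing those interfaces and whether they are up:

package main

import (
    "fmt"
    "net"
    "strings"
)

func main() {
    ifaces, err := net.Interfaces()
    if err != nil {
        panic(err)
    }
    for _, ifc := range ifaces {
        // Cilium creates cilium_host/cilium_net/cilium_vxlan plus an lxc* veth per endpoint.
        if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
            fmt.Printf("%-18s up=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0)
        }
    }
}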
Dec 12 17:43:31.285606 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:43:31.286198 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:43:31.314644 containerd[1526]: time="2025-12-12T17:43:31.314599609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58fvb,Uid:83a4e053-a71b-4157-a776-40b5f3d033fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be\"" Dec 12 17:43:31.316498 containerd[1526]: time="2025-12-12T17:43:31.316301265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9qwvx,Uid:acc63176-1956-4c94-ae39-8fdd569723de,Namespace:kube-system,Attempt:0,} returns sandbox id \"da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829\"" Dec 12 17:43:31.318625 containerd[1526]: time="2025-12-12T17:43:31.318595480Z" level=info msg="CreateContainer within sandbox \"da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:43:31.319939 containerd[1526]: time="2025-12-12T17:43:31.319908552Z" level=info msg="CreateContainer within sandbox \"993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:43:31.329222 containerd[1526]: time="2025-12-12T17:43:31.328963178Z" level=info msg="Container 4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:31.329496 containerd[1526]: time="2025-12-12T17:43:31.329449268Z" level=info msg="Container 18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:43:31.336712 containerd[1526]: time="2025-12-12T17:43:31.336676364Z" level=info msg="CreateContainer within sandbox \"993dae4df02938196a013277001dc919c2fc1edaec43ae32214aa0eabdb0a4be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15\"" Dec 12 17:43:31.337301 containerd[1526]: time="2025-12-12T17:43:31.337277645Z" level=info msg="StartContainer for \"18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15\"" Dec 12 17:43:31.338351 containerd[1526]: time="2025-12-12T17:43:31.338321925Z" level=info msg="connecting to shim 18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15" address="unix:///run/containerd/s/90be2eb6474feba591d6a50653d2743d0751388f66a5ab3a0200c24a3605921d" protocol=ttrpc version=3 Dec 12 17:43:31.342425 containerd[1526]: time="2025-12-12T17:43:31.342364008Z" level=info msg="CreateContainer within sandbox \"da76334513fac73c41132f06ffcb458c02979cce8bf668486ec607bbbb116829\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008\"" Dec 12 17:43:31.343121 containerd[1526]: time="2025-12-12T17:43:31.343098325Z" level=info msg="StartContainer for \"4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008\"" Dec 12 17:43:31.344192 containerd[1526]: time="2025-12-12T17:43:31.344157649Z" level=info msg="connecting to shim 4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008" address="unix:///run/containerd/s/8af107ec0618b8122120858e252cd23b266bcbf8ba26d2dce4fb992f0ea5a707" protocol=ttrpc version=3 Dec 12 17:43:31.361631 
systemd[1]: Started cri-containerd-18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15.scope - libcontainer container 18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15. Dec 12 17:43:31.364349 systemd[1]: Started cri-containerd-4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008.scope - libcontainer container 4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008. Dec 12 17:43:31.398339 containerd[1526]: time="2025-12-12T17:43:31.398297354Z" level=info msg="StartContainer for \"18dca69b4c3fba407489dcc33f015be1554364ad95bce7b018f6ce6442933f15\" returns successfully" Dec 12 17:43:31.399098 containerd[1526]: time="2025-12-12T17:43:31.398904756Z" level=info msg="StartContainer for \"4d95ffcd719ba8f1eb56459308c53a4fb1a898284e747016b70aa494f49b2008\" returns successfully" Dec 12 17:43:31.889993 kubelet[2663]: I1212 17:43:31.889458 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9qwvx" podStartSLOduration=19.889438179 podStartE2EDuration="19.889438179s" podCreationTimestamp="2025-12-12 17:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:31.888110423 +0000 UTC m=+25.203013560" watchObservedRunningTime="2025-12-12 17:43:31.889438179 +0000 UTC m=+25.204341316" Dec 12 17:43:31.901914 kubelet[2663]: I1212 17:43:31.901846 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58fvb" podStartSLOduration=19.901826738 podStartE2EDuration="19.901826738s" podCreationTimestamp="2025-12-12 17:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:43:31.90179765 +0000 UTC m=+25.216700787" watchObservedRunningTime="2025-12-12 17:43:31.901826738 +0000 UTC m=+25.216729875" Dec 12 17:43:32.223415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309527843.mount: Deactivated successfully. Dec 12 17:43:35.451878 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736). Dec 12 17:43:35.508060 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:35.509664 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:35.517015 systemd-logind[1510]: New session 8 of user core. Dec 12 17:43:35.535658 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:43:35.657146 sshd[4015]: Connection closed by 10.0.0.1 port 54736 Dec 12 17:43:35.656186 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:35.660034 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:54736.service: Deactivated successfully. Dec 12 17:43:35.661475 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:43:35.662118 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:43:35.663176 systemd-logind[1510]: Removed session 8. Dec 12 17:43:40.672961 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). 
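The sshd blocks that follow all share the same shape: a per-connection service starts, pam_unix opens a session for user core, systemd-logind assigns a session number, and shortly afterwards the connection closes and the session scope is torn down. Session lifetimes can be read straight off the journal prefixes; a small sketch, assuming the Go standard library and the two pam_unix timestamps for session 8 above:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Journal prefixes carry no year; time.StampMicro ("Jan _2 15:04:05.000000") matches them.
    opened, err := time.Parse(time.StampMicro, "Dec 12 17:43:35.509664")
    if err != nil {
        panic(err)
    }
    closed, err := time.Parse(time.StampMicro, "Dec 12 17:43:35.656186")
    if err != nil {
        panic(err)
    }
    fmt.Println("session 8 lasted about", closed.Sub(opened)) // 146.522ms
}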
Dec 12 17:43:40.754279 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:40.755728 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:40.763161 systemd-logind[1510]: New session 9 of user core. Dec 12 17:43:40.770694 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:43:40.890530 sshd[4035]: Connection closed by 10.0.0.1 port 54738 Dec 12 17:43:40.890197 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:40.894124 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:54738.service: Deactivated successfully. Dec 12 17:43:40.895975 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:43:40.898649 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:43:40.899595 systemd-logind[1510]: Removed session 9. Dec 12 17:43:45.906732 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:37832.service - OpenSSH per-connection server daemon (10.0.0.1:37832). Dec 12 17:43:45.974630 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 37832 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:45.976033 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:45.980195 systemd-logind[1510]: New session 10 of user core. Dec 12 17:43:45.989670 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:43:46.117171 sshd[4058]: Connection closed by 10.0.0.1 port 37832 Dec 12 17:43:46.119269 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:46.129612 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:37832.service: Deactivated successfully. Dec 12 17:43:46.131172 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:43:46.131822 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:43:46.133188 systemd-logind[1510]: Removed session 10. Dec 12 17:43:46.134319 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:37848.service - OpenSSH per-connection server daemon (10.0.0.1:37848). Dec 12 17:43:46.194247 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 37848 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:46.195752 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:46.200670 systemd-logind[1510]: New session 11 of user core. Dec 12 17:43:46.214626 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:43:46.378641 sshd[4075]: Connection closed by 10.0.0.1 port 37848 Dec 12 17:43:46.378516 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:46.389853 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:37848.service: Deactivated successfully. Dec 12 17:43:46.393085 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:43:46.394819 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:43:46.399772 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:37854.service - OpenSSH per-connection server daemon (10.0.0.1:37854). Dec 12 17:43:46.401786 systemd-logind[1510]: Removed session 11. 
Dec 12 17:43:46.455238 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 37854 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:46.457012 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:46.461679 systemd-logind[1510]: New session 12 of user core. Dec 12 17:43:46.467630 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:43:46.577800 sshd[4089]: Connection closed by 10.0.0.1 port 37854 Dec 12 17:43:46.578324 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:46.582035 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:37854.service: Deactivated successfully. Dec 12 17:43:46.583686 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:43:46.584311 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:43:46.585708 systemd-logind[1510]: Removed session 12. Dec 12 17:43:51.595732 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:60980.service - OpenSSH per-connection server daemon (10.0.0.1:60980). Dec 12 17:43:51.658761 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 60980 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:51.660045 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:51.664383 systemd-logind[1510]: New session 13 of user core. Dec 12 17:43:51.671673 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:43:51.782514 sshd[4106]: Connection closed by 10.0.0.1 port 60980 Dec 12 17:43:51.782491 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:51.785857 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:43:51.786339 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:60980.service: Deactivated successfully. Dec 12 17:43:51.788144 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:43:51.790240 systemd-logind[1510]: Removed session 13. Dec 12 17:43:56.802632 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782). Dec 12 17:43:56.874482 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:56.876155 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:56.880835 systemd-logind[1510]: New session 14 of user core. Dec 12 17:43:56.891710 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:43:57.010277 sshd[4122]: Connection closed by 10.0.0.1 port 32782 Dec 12 17:43:57.010787 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:57.022938 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:32782.service: Deactivated successfully. Dec 12 17:43:57.024868 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:43:57.025749 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:43:57.028996 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:32796.service - OpenSSH per-connection server daemon (10.0.0.1:32796). Dec 12 17:43:57.029986 systemd-logind[1510]: Removed session 14. 
Dec 12 17:43:57.092156 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:57.093645 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:57.097897 systemd-logind[1510]: New session 15 of user core. Dec 12 17:43:57.113718 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:43:57.302113 sshd[4139]: Connection closed by 10.0.0.1 port 32796 Dec 12 17:43:57.302675 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:57.317175 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:32796.service: Deactivated successfully. Dec 12 17:43:57.319103 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:43:57.319922 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:43:57.322355 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:32804.service - OpenSSH per-connection server daemon (10.0.0.1:32804). Dec 12 17:43:57.323135 systemd-logind[1510]: Removed session 15. Dec 12 17:43:57.382437 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 32804 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:57.383823 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:57.388558 systemd-logind[1510]: New session 16 of user core. Dec 12 17:43:57.405713 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:43:57.920864 sshd[4154]: Connection closed by 10.0.0.1 port 32804 Dec 12 17:43:57.921197 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:57.932081 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:32804.service: Deactivated successfully. Dec 12 17:43:57.937846 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:43:57.940495 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:43:57.943020 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:32830.service - OpenSSH per-connection server daemon (10.0.0.1:32830). Dec 12 17:43:57.945117 systemd-logind[1510]: Removed session 16. Dec 12 17:43:58.011414 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 32830 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:58.012752 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:58.017109 systemd-logind[1510]: New session 17 of user core. Dec 12 17:43:58.032681 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:43:58.250423 sshd[4176]: Connection closed by 10.0.0.1 port 32830 Dec 12 17:43:58.251183 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:58.260103 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:32830.service: Deactivated successfully. Dec 12 17:43:58.265073 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:43:58.267069 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:43:58.269941 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:32832.service - OpenSSH per-connection server daemon (10.0.0.1:32832). Dec 12 17:43:58.270541 systemd-logind[1510]: Removed session 17. 
Dec 12 17:43:58.327990 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 32832 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:43:58.329433 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:58.334193 systemd-logind[1510]: New session 18 of user core. Dec 12 17:43:58.338675 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:43:58.450618 sshd[4190]: Connection closed by 10.0.0.1 port 32832 Dec 12 17:43:58.450960 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:58.454673 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:32832.service: Deactivated successfully. Dec 12 17:43:58.456813 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:43:58.458533 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:43:58.459575 systemd-logind[1510]: Removed session 18. Dec 12 17:44:03.464842 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:32852.service - OpenSSH per-connection server daemon (10.0.0.1:32852). Dec 12 17:44:03.521109 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 32852 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:03.525099 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:03.529342 systemd-logind[1510]: New session 19 of user core. Dec 12 17:44:03.538653 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:44:03.654494 sshd[4209]: Connection closed by 10.0.0.1 port 32852 Dec 12 17:44:03.655987 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:03.659374 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:32852.service: Deactivated successfully. Dec 12 17:44:03.661089 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:44:03.661986 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:44:03.663168 systemd-logind[1510]: Removed session 19. Dec 12 17:44:08.670620 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:32864.service - OpenSSH per-connection server daemon (10.0.0.1:32864). Dec 12 17:44:08.723338 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 32864 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:08.724636 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:08.728325 systemd-logind[1510]: New session 20 of user core. Dec 12 17:44:08.739804 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:44:08.848172 sshd[4227]: Connection closed by 10.0.0.1 port 32864 Dec 12 17:44:08.848535 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:08.851784 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:32864.service: Deactivated successfully. Dec 12 17:44:08.853810 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:44:08.854671 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:44:08.855613 systemd-logind[1510]: Removed session 20. Dec 12 17:44:13.864949 systemd[1]: Started sshd@20-10.0.0.114:22-10.0.0.1:47720.service - OpenSSH per-connection server daemon (10.0.0.1:47720). 
Dec 12 17:44:13.911773 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 47720 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:13.913186 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:13.917331 systemd-logind[1510]: New session 21 of user core. Dec 12 17:44:13.930753 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 17:44:14.049631 sshd[4247]: Connection closed by 10.0.0.1 port 47720 Dec 12 17:44:14.050157 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:14.061030 systemd[1]: sshd@20-10.0.0.114:22-10.0.0.1:47720.service: Deactivated successfully. Dec 12 17:44:14.064173 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:44:14.065047 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:44:14.068491 systemd[1]: Started sshd@21-10.0.0.114:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Dec 12 17:44:14.069274 systemd-logind[1510]: Removed session 21. Dec 12 17:44:14.124657 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:14.126435 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:14.130878 systemd-logind[1510]: New session 22 of user core. Dec 12 17:44:14.146710 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:44:15.736107 containerd[1526]: time="2025-12-12T17:44:15.736064054Z" level=info msg="StopContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" with timeout 30 (s)" Dec 12 17:44:15.736805 containerd[1526]: time="2025-12-12T17:44:15.736776180Z" level=info msg="Stop container \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" with signal terminated" Dec 12 17:44:15.752794 systemd[1]: cri-containerd-d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb.scope: Deactivated successfully. Dec 12 17:44:15.754858 containerd[1526]: time="2025-12-12T17:44:15.754189538Z" level=info msg="received container exit event container_id:\"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" id:\"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" pid:3238 exited_at:{seconds:1765561455 nanos:753868971}" Dec 12 17:44:15.770788 containerd[1526]: time="2025-12-12T17:44:15.770728785Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:44:15.773825 containerd[1526]: time="2025-12-12T17:44:15.773796068Z" level=info msg="StopContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" with timeout 2 (s)" Dec 12 17:44:15.774178 containerd[1526]: time="2025-12-12T17:44:15.774040963Z" level=info msg="Stop container \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" with signal terminated" Dec 12 17:44:15.778728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb-rootfs.mount: Deactivated successfully. 
Dec 12 17:44:15.782742 systemd-networkd[1434]: lxc_health: Link DOWN Dec 12 17:44:15.782747 systemd-networkd[1434]: lxc_health: Lost carrier Dec 12 17:44:15.790715 containerd[1526]: time="2025-12-12T17:44:15.790670161Z" level=info msg="StopContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" returns successfully" Dec 12 17:44:15.791494 containerd[1526]: time="2025-12-12T17:44:15.791385647Z" level=info msg="StopPodSandbox for \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\"" Dec 12 17:44:15.799392 containerd[1526]: time="2025-12-12T17:44:15.799326425Z" level=info msg="Container to stop \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.802023 systemd[1]: cri-containerd-7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da.scope: Deactivated successfully. Dec 12 17:44:15.802335 systemd[1]: cri-containerd-7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da.scope: Consumed 6.454s CPU time, 122.8M memory peak, 136K read from disk, 14.1M written to disk. Dec 12 17:44:15.807093 systemd[1]: cri-containerd-328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a.scope: Deactivated successfully. Dec 12 17:44:15.808134 containerd[1526]: time="2025-12-12T17:44:15.808096517Z" level=info msg="received sandbox exit event container_id:\"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" id:\"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" exit_status:137 exited_at:{seconds:1765561455 nanos:807666202}" monitor_name=podsandbox Dec 12 17:44:15.809552 containerd[1526]: time="2025-12-12T17:44:15.809514090Z" level=info msg="received container exit event container_id:\"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" id:\"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" pid:3325 exited_at:{seconds:1765561455 nanos:809252477}" Dec 12 17:44:15.829440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da-rootfs.mount: Deactivated successfully. 
Dec 12 17:44:15.838437 containerd[1526]: time="2025-12-12T17:44:15.838403380Z" level=info msg="StopContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" returns successfully" Dec 12 17:44:15.839192 containerd[1526]: time="2025-12-12T17:44:15.839167780Z" level=info msg="StopPodSandbox for \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\"" Dec 12 17:44:15.839248 containerd[1526]: time="2025-12-12T17:44:15.839223335Z" level=info msg="Container to stop \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.839248 containerd[1526]: time="2025-12-12T17:44:15.839234853Z" level=info msg="Container to stop \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.839248 containerd[1526]: time="2025-12-12T17:44:15.839243293Z" level=info msg="Container to stop \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.839318 containerd[1526]: time="2025-12-12T17:44:15.839252412Z" level=info msg="Container to stop \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.839318 containerd[1526]: time="2025-12-12T17:44:15.839260211Z" level=info msg="Container to stop \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:44:15.843776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a-rootfs.mount: Deactivated successfully. Dec 12 17:44:15.848024 containerd[1526]: time="2025-12-12T17:44:15.847957670Z" level=info msg="shim disconnected" id=328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a namespace=k8s.io Dec 12 17:44:15.848024 containerd[1526]: time="2025-12-12T17:44:15.847985348Z" level=warning msg="cleaning up after shim disconnected" id=328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a namespace=k8s.io Dec 12 17:44:15.848024 containerd[1526]: time="2025-12-12T17:44:15.848017584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:44:15.848363 systemd[1]: cri-containerd-3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25.scope: Deactivated successfully. 
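Both sandbox exit events above report exit_status:137. By the usual wait-status convention, values above 128 encode 128 plus a signal number, so 137 corresponds to signal 9 (SIGKILL), which is consistent with the sandbox's pause process being killed as the pod is stopped. A trivial illustration of that decoding:

package main

import "fmt"

func main() {
    // exit_status value taken from the sandbox exit events above.
    const exitStatus = 137
    if exitStatus > 128 {
        // 137 - 128 = 9, i.e. SIGKILL.
        fmt.Printf("exit status %d => killed by signal %d\n", exitStatus, exitStatus-128)
    }
}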
Dec 12 17:44:15.850486 containerd[1526]: time="2025-12-12T17:44:15.850055933Z" level=info msg="received sandbox exit event container_id:\"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" id:\"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" exit_status:137 exited_at:{seconds:1765561455 nanos:849867033}" monitor_name=podsandbox Dec 12 17:44:15.863045 containerd[1526]: time="2025-12-12T17:44:15.863001073Z" level=info msg="TearDown network for sandbox \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" successfully" Dec 12 17:44:15.863045 containerd[1526]: time="2025-12-12T17:44:15.863034270Z" level=info msg="StopPodSandbox for \"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" returns successfully" Dec 12 17:44:15.863810 containerd[1526]: time="2025-12-12T17:44:15.863356156Z" level=info msg="received sandbox container exit event sandbox_id:\"328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a\" exit_status:137 exited_at:{seconds:1765561455 nanos:807666202}" monitor_name=criService Dec 12 17:44:15.864551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-328c54b766c807a12c1667572737c399ee3b82859994271196780828ae2f522a-shm.mount: Deactivated successfully. Dec 12 17:44:15.872024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25-rootfs.mount: Deactivated successfully. Dec 12 17:44:15.880074 containerd[1526]: time="2025-12-12T17:44:15.880037589Z" level=info msg="shim disconnected" id=3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25 namespace=k8s.io Dec 12 17:44:15.880558 containerd[1526]: time="2025-12-12T17:44:15.880503381Z" level=warning msg="cleaning up after shim disconnected" id=3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25 namespace=k8s.io Dec 12 17:44:15.880558 containerd[1526]: time="2025-12-12T17:44:15.880553856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:44:15.894174 containerd[1526]: time="2025-12-12T17:44:15.893881236Z" level=info msg="received sandbox container exit event sandbox_id:\"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" exit_status:137 exited_at:{seconds:1765561455 nanos:849867033}" monitor_name=criService Dec 12 17:44:15.894285 containerd[1526]: time="2025-12-12T17:44:15.894035260Z" level=info msg="TearDown network for sandbox \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" successfully" Dec 12 17:44:15.894285 containerd[1526]: time="2025-12-12T17:44:15.894250318Z" level=info msg="StopPodSandbox for \"3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25\" returns successfully" Dec 12 17:44:15.898882 kubelet[2663]: I1212 17:44:15.898828 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e797a90-7add-4a3d-a8d7-34cf16809b8f-cilium-config-path\") pod \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\" (UID: \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\") " Dec 12 17:44:15.898882 kubelet[2663]: I1212 17:44:15.898866 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8bs\" (UniqueName: \"kubernetes.io/projected/4e797a90-7add-4a3d-a8d7-34cf16809b8f-kube-api-access-9l8bs\") pod \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\" (UID: \"4e797a90-7add-4a3d-a8d7-34cf16809b8f\") " Dec 12 17:44:15.904564 kubelet[2663]: I1212 17:44:15.903020 2663 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e797a90-7add-4a3d-a8d7-34cf16809b8f-kube-api-access-9l8bs" (OuterVolumeSpecName: "kube-api-access-9l8bs") pod "4e797a90-7add-4a3d-a8d7-34cf16809b8f" (UID: "4e797a90-7add-4a3d-a8d7-34cf16809b8f"). InnerVolumeSpecName "kube-api-access-9l8bs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:44:15.905540 kubelet[2663]: I1212 17:44:15.905506 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e797a90-7add-4a3d-a8d7-34cf16809b8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e797a90-7add-4a3d-a8d7-34cf16809b8f" (UID: "4e797a90-7add-4a3d-a8d7-34cf16809b8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:44:15.979483 kubelet[2663]: I1212 17:44:15.979216 2663 scope.go:117] "RemoveContainer" containerID="d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb" Dec 12 17:44:15.981789 containerd[1526]: time="2025-12-12T17:44:15.981662349Z" level=info msg="RemoveContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\"" Dec 12 17:44:15.984472 systemd[1]: Removed slice kubepods-besteffort-pod4e797a90_7add_4a3d_a8d7_34cf16809b8f.slice - libcontainer container kubepods-besteffort-pod4e797a90_7add_4a3d_a8d7_34cf16809b8f.slice. Dec 12 17:44:15.987772 containerd[1526]: time="2025-12-12T17:44:15.986980078Z" level=info msg="RemoveContainer for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" returns successfully" Dec 12 17:44:15.990658 kubelet[2663]: I1212 17:44:15.989823 2663 scope.go:117] "RemoveContainer" containerID="d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb" Dec 12 17:44:15.990658 kubelet[2663]: E1212 17:44:15.990268 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\": not found" containerID="d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb" Dec 12 17:44:15.990786 containerd[1526]: time="2025-12-12T17:44:15.990100995Z" level=error msg="ContainerStatus for \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\": not found" Dec 12 17:44:15.995366 kubelet[2663]: I1212 17:44:15.995251 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb"} err="failed to get container status \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d14c74fb4b912a089b00b60ca43339008a266e9e749c9203d5a28db779ad51eb\": not found" Dec 12 17:44:15.995366 kubelet[2663]: I1212 17:44:15.995366 2663 scope.go:117] "RemoveContainer" containerID="7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da" Dec 12 17:44:15.998306 containerd[1526]: time="2025-12-12T17:44:15.998278309Z" level=info msg="RemoveContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\"" Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999721 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-kernel\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999795 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-bpf-maps\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999816 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-hostproc\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999835 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-hubble-tls\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999854 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/570756bd-fcd7-432f-a194-78279a547fff-clustermesh-secrets\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000414 kubelet[2663]: I1212 17:44:15.999868 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-run\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999884 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lp7w\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-kube-api-access-4lp7w\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999901 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-etc-cni-netd\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999917 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/570756bd-fcd7-432f-a194-78279a547fff-cilium-config-path\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999931 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cni-path\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999946 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-xtables-lock\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000692 kubelet[2663]: I1212 17:44:15.999963 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-cgroup\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:15.999976 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-net\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:15.999992 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-lib-modules\") pod \"570756bd-fcd7-432f-a194-78279a547fff\" (UID: \"570756bd-fcd7-432f-a194-78279a547fff\") " Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:16.000025 2663 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9l8bs\" (UniqueName: \"kubernetes.io/projected/4e797a90-7add-4a3d-a8d7-34cf16809b8f-kube-api-access-9l8bs\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:16.000036 2663 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e797a90-7add-4a3d-a8d7-34cf16809b8f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:16.000081 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.000822 kubelet[2663]: I1212 17:44:16.000112 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001014 kubelet[2663]: I1212 17:44:16.000126 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001014 kubelet[2663]: I1212 17:44:16.000140 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-hostproc" (OuterVolumeSpecName: "hostproc") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001014 kubelet[2663]: I1212 17:44:16.000609 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001014 kubelet[2663]: I1212 17:44:16.000645 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cni-path" (OuterVolumeSpecName: "cni-path") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001014 kubelet[2663]: I1212 17:44:16.000662 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001155 kubelet[2663]: I1212 17:44:16.000701 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.001155 kubelet[2663]: I1212 17:44:16.000715 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.003039 kubelet[2663]: I1212 17:44:16.002998 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/570756bd-fcd7-432f-a194-78279a547fff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:44:16.003104 kubelet[2663]: I1212 17:44:16.003052 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:44:16.004608 kubelet[2663]: I1212 17:44:16.004579 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:44:16.004722 kubelet[2663]: I1212 17:44:16.004577 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570756bd-fcd7-432f-a194-78279a547fff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:44:16.004967 containerd[1526]: time="2025-12-12T17:44:16.004928484Z" level=info msg="RemoveContainer for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" returns successfully" Dec 12 17:44:16.005133 kubelet[2663]: I1212 17:44:16.005111 2663 scope.go:117] "RemoveContainer" containerID="2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d" Dec 12 17:44:16.005580 kubelet[2663]: I1212 17:44:16.005541 2663 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-kube-api-access-4lp7w" (OuterVolumeSpecName: "kube-api-access-4lp7w") pod "570756bd-fcd7-432f-a194-78279a547fff" (UID: "570756bd-fcd7-432f-a194-78279a547fff"). InnerVolumeSpecName "kube-api-access-4lp7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:44:16.008306 containerd[1526]: time="2025-12-12T17:44:16.008270076Z" level=info msg="RemoveContainer for \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\"" Dec 12 17:44:16.022702 containerd[1526]: time="2025-12-12T17:44:16.022621426Z" level=info msg="RemoveContainer for \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" returns successfully" Dec 12 17:44:16.023104 kubelet[2663]: I1212 17:44:16.022969 2663 scope.go:117] "RemoveContainer" containerID="da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5" Dec 12 17:44:16.030489 containerd[1526]: time="2025-12-12T17:44:16.028614197Z" level=info msg="RemoveContainer for \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\"" Dec 12 17:44:16.037491 containerd[1526]: time="2025-12-12T17:44:16.035730818Z" level=info msg="RemoveContainer for \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" returns successfully" Dec 12 17:44:16.037934 kubelet[2663]: I1212 17:44:16.037895 2663 scope.go:117] "RemoveContainer" containerID="3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53" Dec 12 17:44:16.048511 containerd[1526]: time="2025-12-12T17:44:16.047482544Z" level=info msg="RemoveContainer for \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\"" Dec 12 17:44:16.053594 containerd[1526]: time="2025-12-12T17:44:16.053408402Z" level=info msg="RemoveContainer for \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" returns successfully" Dec 12 17:44:16.055498 kubelet[2663]: I1212 17:44:16.053842 2663 scope.go:117] "RemoveContainer" containerID="6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a" Dec 12 17:44:16.056215 containerd[1526]: time="2025-12-12T17:44:16.056166131Z" level=info msg="RemoveContainer for \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\"" Dec 12 17:44:16.059575 containerd[1526]: time="2025-12-12T17:44:16.059540359Z" level=info msg="RemoveContainer for \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" returns successfully" Dec 12 17:44:16.059744 kubelet[2663]: I1212 17:44:16.059716 2663 scope.go:117] "RemoveContainer" 
containerID="7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da" Dec 12 17:44:16.059955 containerd[1526]: time="2025-12-12T17:44:16.059899964Z" level=error msg="ContainerStatus for \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\": not found" Dec 12 17:44:16.060069 kubelet[2663]: E1212 17:44:16.060047 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\": not found" containerID="7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da" Dec 12 17:44:16.060102 kubelet[2663]: I1212 17:44:16.060077 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da"} err="failed to get container status \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c39f7aab49ce2d7177e8ad8a5d1a52d7f11c42614d15e413c15b046ff5ad7da\": not found" Dec 12 17:44:16.060102 kubelet[2663]: I1212 17:44:16.060099 2663 scope.go:117] "RemoveContainer" containerID="2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d" Dec 12 17:44:16.060281 containerd[1526]: time="2025-12-12T17:44:16.060252169Z" level=error msg="ContainerStatus for \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\": not found" Dec 12 17:44:16.060405 kubelet[2663]: E1212 17:44:16.060380 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\": not found" containerID="2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d" Dec 12 17:44:16.060453 kubelet[2663]: I1212 17:44:16.060436 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d"} err="failed to get container status \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2caf456532940f6fd86818c1e9ae32b9087f3d5011b7e55428846a7d6800bc1d\": not found" Dec 12 17:44:16.060498 kubelet[2663]: I1212 17:44:16.060454 2663 scope.go:117] "RemoveContainer" containerID="da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5" Dec 12 17:44:16.060612 containerd[1526]: time="2025-12-12T17:44:16.060587576Z" level=error msg="ContainerStatus for \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\": not found" Dec 12 17:44:16.060784 kubelet[2663]: E1212 17:44:16.060760 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\": not found" 
containerID="da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5" Dec 12 17:44:16.060815 kubelet[2663]: I1212 17:44:16.060790 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5"} err="failed to get container status \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"da898418a6cfbf320dbb71b47f323eb033a70651cac70ecb7d1e0fddb63aa2f5\": not found" Dec 12 17:44:16.060973 kubelet[2663]: I1212 17:44:16.060806 2663 scope.go:117] "RemoveContainer" containerID="3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53" Dec 12 17:44:16.061005 containerd[1526]: time="2025-12-12T17:44:16.060955620Z" level=error msg="ContainerStatus for \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\": not found" Dec 12 17:44:16.061088 kubelet[2663]: E1212 17:44:16.061044 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\": not found" containerID="3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53" Dec 12 17:44:16.061088 kubelet[2663]: I1212 17:44:16.061067 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53"} err="failed to get container status \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\": rpc error: code = NotFound desc = an error occurred when try to find container \"3520923a5a4e198cd73d299a73859505452af7e5654f79d8e0c48385cc27df53\": not found" Dec 12 17:44:16.061088 kubelet[2663]: I1212 17:44:16.061083 2663 scope.go:117] "RemoveContainer" containerID="6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a" Dec 12 17:44:16.061247 containerd[1526]: time="2025-12-12T17:44:16.061200036Z" level=error msg="ContainerStatus for \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\": not found" Dec 12 17:44:16.061323 kubelet[2663]: E1212 17:44:16.061305 2663 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\": not found" containerID="6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a" Dec 12 17:44:16.061375 kubelet[2663]: I1212 17:44:16.061359 2663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a"} err="failed to get container status \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f5e842ac535100459b1069fa73d8eb60ebfca024cf5d21935f5a84d30f78a\": not found" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100753 2663 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100794 2663 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100804 2663 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/570756bd-fcd7-432f-a194-78279a547fff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100813 2663 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100828 2663 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100836 2663 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100844 2663 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.100835 kubelet[2663]: I1212 17:44:16.100853 2663 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100862 2663 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100871 2663 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100879 2663 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100887 2663 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/570756bd-fcd7-432f-a194-78279a547fff-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100894 2663 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4lp7w\" (UniqueName: \"kubernetes.io/projected/570756bd-fcd7-432f-a194-78279a547fff-kube-api-access-4lp7w\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.101141 kubelet[2663]: I1212 17:44:16.100902 2663 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/570756bd-fcd7-432f-a194-78279a547fff-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 12 17:44:16.294403 systemd[1]: Removed slice kubepods-burstable-pod570756bd_fcd7_432f_a194_78279a547fff.slice - libcontainer container kubepods-burstable-pod570756bd_fcd7_432f_a194_78279a547fff.slice. Dec 12 17:44:16.294520 systemd[1]: kubepods-burstable-pod570756bd_fcd7_432f_a194_78279a547fff.slice: Consumed 6.567s CPU time, 123.1M memory peak, 2.2M read from disk, 14.2M written to disk. Dec 12 17:44:16.766794 kubelet[2663]: I1212 17:44:16.766751 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e797a90-7add-4a3d-a8d7-34cf16809b8f" path="/var/lib/kubelet/pods/4e797a90-7add-4a3d-a8d7-34cf16809b8f/volumes" Dec 12 17:44:16.767151 kubelet[2663]: I1212 17:44:16.767133 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570756bd-fcd7-432f-a194-78279a547fff" path="/var/lib/kubelet/pods/570756bd-fcd7-432f-a194-78279a547fff/volumes" Dec 12 17:44:16.778578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a3014205f877254b4771a674b12b5229df0f332cd483f6da1cd2c4bd6491d25-shm.mount: Deactivated successfully. Dec 12 17:44:16.778683 systemd[1]: var-lib-kubelet-pods-4e797a90\x2d7add\x2d4a3d\x2da8d7\x2d34cf16809b8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9l8bs.mount: Deactivated successfully. Dec 12 17:44:16.778745 systemd[1]: var-lib-kubelet-pods-570756bd\x2dfcd7\x2d432f\x2da194\x2d78279a547fff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4lp7w.mount: Deactivated successfully. Dec 12 17:44:16.778797 systemd[1]: var-lib-kubelet-pods-570756bd\x2dfcd7\x2d432f\x2da194\x2d78279a547fff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 17:44:16.778845 systemd[1]: var-lib-kubelet-pods-570756bd\x2dfcd7\x2d432f\x2da194\x2d78279a547fff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:44:16.832324 kubelet[2663]: E1212 17:44:16.832280 2663 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:44:17.691092 sshd[4264]: Connection closed by 10.0.0.1 port 47734 Dec 12 17:44:17.691376 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:17.703687 systemd[1]: sshd@21-10.0.0.114:22-10.0.0.1:47734.service: Deactivated successfully. Dec 12 17:44:17.705701 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:44:17.706950 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:44:17.711709 systemd[1]: Started sshd@22-10.0.0.114:22-10.0.0.1:47738.service - OpenSSH per-connection server daemon (10.0.0.1:47738). Dec 12 17:44:17.712799 systemd-logind[1510]: Removed session 22. Dec 12 17:44:17.765304 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 47738 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:17.767332 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:17.772560 systemd-logind[1510]: New session 23 of user core. Dec 12 17:44:17.781618 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 17:44:18.240673 kubelet[2663]: I1212 17:44:18.240616 2663 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T17:44:18Z","lastTransitionTime":"2025-12-12T17:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 17:44:19.138518 sshd[4412]: Connection closed by 10.0.0.1 port 47738 Dec 12 17:44:19.138752 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:19.151155 systemd[1]: sshd@22-10.0.0.114:22-10.0.0.1:47738.service: Deactivated successfully. Dec 12 17:44:19.155841 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:44:19.156051 systemd[1]: session-23.scope: Consumed 1.246s CPU time, 24M memory peak. Dec 12 17:44:19.157459 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:44:19.159136 systemd[1]: Started sshd@23-10.0.0.114:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Dec 12 17:44:19.161965 systemd-logind[1510]: Removed session 23. Dec 12 17:44:19.169879 kubelet[2663]: I1212 17:44:19.169831 2663 memory_manager.go:355] "RemoveStaleState removing state" podUID="570756bd-fcd7-432f-a194-78279a547fff" containerName="cilium-agent" Dec 12 17:44:19.169879 kubelet[2663]: I1212 17:44:19.169869 2663 memory_manager.go:355] "RemoveStaleState removing state" podUID="4e797a90-7add-4a3d-a8d7-34cf16809b8f" containerName="cilium-operator" Dec 12 17:44:19.181127 systemd[1]: Created slice kubepods-burstable-podde5c912e_27e9_490c_a0c7_32e02b08d29c.slice - libcontainer container kubepods-burstable-podde5c912e_27e9_490c_a0c7_32e02b08d29c.slice. 
Dec 12 17:44:19.222050 kubelet[2663]: I1212 17:44:19.221967 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-lib-modules\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222071 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de5c912e-27e9-490c-a0c7-32e02b08d29c-clustermesh-secrets\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222101 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-cni-path\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222121 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de5c912e-27e9-490c-a0c7-32e02b08d29c-cilium-config-path\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222140 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de5c912e-27e9-490c-a0c7-32e02b08d29c-hubble-tls\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222160 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-cilium-run\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222184 kubelet[2663]: I1212 17:44:19.222176 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-host-proc-sys-net\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222191 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-hostproc\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222208 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de5c912e-27e9-490c-a0c7-32e02b08d29c-cilium-ipsec-secrets\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222239 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-bpf-maps\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222268 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-cilium-cgroup\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222286 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-xtables-lock\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222317 kubelet[2663]: I1212 17:44:19.222302 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnfwv\" (UniqueName: \"kubernetes.io/projected/de5c912e-27e9-490c-a0c7-32e02b08d29c-kube-api-access-qnfwv\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222449 kubelet[2663]: I1212 17:44:19.222322 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-etc-cni-netd\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.222449 kubelet[2663]: I1212 17:44:19.222341 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de5c912e-27e9-490c-a0c7-32e02b08d29c-host-proc-sys-kernel\") pod \"cilium-4qnxs\" (UID: \"de5c912e-27e9-490c-a0c7-32e02b08d29c\") " pod="kube-system/cilium-4qnxs" Dec 12 17:44:19.237024 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:19.238822 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:19.242659 systemd-logind[1510]: New session 24 of user core. Dec 12 17:44:19.255692 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 17:44:19.304892 sshd[4427]: Connection closed by 10.0.0.1 port 47746 Dec 12 17:44:19.305378 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:19.321881 systemd[1]: sshd@23-10.0.0.114:22-10.0.0.1:47746.service: Deactivated successfully. Dec 12 17:44:19.330587 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:44:19.336451 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:44:19.341042 systemd[1]: Started sshd@24-10.0.0.114:22-10.0.0.1:47752.service - OpenSSH per-connection server daemon (10.0.0.1:47752). Dec 12 17:44:19.341943 systemd-logind[1510]: Removed session 24. Dec 12 17:44:19.403887 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:44:19.409634 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:44:19.414604 systemd-logind[1510]: New session 25 of user core. 
Dec 12 17:44:19.423650 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:44:19.487603 containerd[1526]: time="2025-12-12T17:44:19.487550884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qnxs,Uid:de5c912e-27e9-490c-a0c7-32e02b08d29c,Namespace:kube-system,Attempt:0,}" Dec 12 17:44:19.513971 containerd[1526]: time="2025-12-12T17:44:19.513929085Z" level=info msg="connecting to shim 70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:44:19.540715 systemd[1]: Started cri-containerd-70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c.scope - libcontainer container 70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c. Dec 12 17:44:19.577802 containerd[1526]: time="2025-12-12T17:44:19.577759605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qnxs,Uid:de5c912e-27e9-490c-a0c7-32e02b08d29c,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\"" Dec 12 17:44:19.580438 containerd[1526]: time="2025-12-12T17:44:19.580393506Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:44:19.587094 containerd[1526]: time="2025-12-12T17:44:19.587045671Z" level=info msg="Container d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:44:19.592613 containerd[1526]: time="2025-12-12T17:44:19.592571531Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a\"" Dec 12 17:44:19.593147 containerd[1526]: time="2025-12-12T17:44:19.593121445Z" level=info msg="StartContainer for \"d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a\"" Dec 12 17:44:19.594239 containerd[1526]: time="2025-12-12T17:44:19.594214794Z" level=info msg="connecting to shim d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" protocol=ttrpc version=3 Dec 12 17:44:19.619700 systemd[1]: Started cri-containerd-d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a.scope - libcontainer container d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a. Dec 12 17:44:19.647100 containerd[1526]: time="2025-12-12T17:44:19.647064669Z" level=info msg="StartContainer for \"d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a\" returns successfully" Dec 12 17:44:19.655046 systemd[1]: cri-containerd-d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a.scope: Deactivated successfully. 
Dec 12 17:44:19.656224 containerd[1526]: time="2025-12-12T17:44:19.656189749Z" level=info msg="received container exit event container_id:\"d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a\" id:\"d9e43e8c38b55704d550a3d2db2a60348f1a7803c3488e705f2013c66687365a\" pid:4509 exited_at:{seconds:1765561459 nanos:655952969}" Dec 12 17:44:20.003849 containerd[1526]: time="2025-12-12T17:44:20.003740114Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:44:20.013163 containerd[1526]: time="2025-12-12T17:44:20.012503344Z" level=info msg="Container 1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:44:20.018176 containerd[1526]: time="2025-12-12T17:44:20.018069106Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792\"" Dec 12 17:44:20.019817 containerd[1526]: time="2025-12-12T17:44:20.019627543Z" level=info msg="StartContainer for \"1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792\"" Dec 12 17:44:20.021006 containerd[1526]: time="2025-12-12T17:44:20.020973838Z" level=info msg="connecting to shim 1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" protocol=ttrpc version=3 Dec 12 17:44:20.047686 systemd[1]: Started cri-containerd-1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792.scope - libcontainer container 1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792. Dec 12 17:44:20.074097 containerd[1526]: time="2025-12-12T17:44:20.074059660Z" level=info msg="StartContainer for \"1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792\" returns successfully" Dec 12 17:44:20.081502 containerd[1526]: time="2025-12-12T17:44:20.081377325Z" level=info msg="received container exit event container_id:\"1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792\" id:\"1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792\" pid:4553 exited_at:{seconds:1765561460 nanos:81184500}" Dec 12 17:44:20.081541 systemd[1]: cri-containerd-1c998628f62eb1badfa32aea43d2c94f62db832329f22877032f9d6b2c333792.scope: Deactivated successfully. 
Dec 12 17:44:21.007128 containerd[1526]: time="2025-12-12T17:44:21.007089750Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 17:44:21.017496 containerd[1526]: time="2025-12-12T17:44:21.017314671Z" level=info msg="Container c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:44:21.035492 containerd[1526]: time="2025-12-12T17:44:21.035296937Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120\"" Dec 12 17:44:21.035859 containerd[1526]: time="2025-12-12T17:44:21.035786741Z" level=info msg="StartContainer for \"c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120\"" Dec 12 17:44:21.038067 containerd[1526]: time="2025-12-12T17:44:21.038018616Z" level=info msg="connecting to shim c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" protocol=ttrpc version=3 Dec 12 17:44:21.062690 systemd[1]: Started cri-containerd-c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120.scope - libcontainer container c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120. Dec 12 17:44:21.119210 containerd[1526]: time="2025-12-12T17:44:21.118905336Z" level=info msg="StartContainer for \"c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120\" returns successfully" Dec 12 17:44:21.119895 systemd[1]: cri-containerd-c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120.scope: Deactivated successfully. Dec 12 17:44:21.121789 containerd[1526]: time="2025-12-12T17:44:21.121753085Z" level=info msg="received container exit event container_id:\"c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120\" id:\"c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120\" pid:4597 exited_at:{seconds:1765561461 nanos:121351834}" Dec 12 17:44:21.145980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6183c475d5fd02ad0da82733d8182d20d1c04625fe41c42628896a9a8afe120-rootfs.mount: Deactivated successfully. 
Dec 12 17:44:21.834547 kubelet[2663]: E1212 17:44:21.834324 2663 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:44:22.023704 containerd[1526]: time="2025-12-12T17:44:22.023064004Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 17:44:22.033141 containerd[1526]: time="2025-12-12T17:44:22.033093783Z" level=info msg="Container ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:44:22.040912 containerd[1526]: time="2025-12-12T17:44:22.040872481Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b\"" Dec 12 17:44:22.041447 containerd[1526]: time="2025-12-12T17:44:22.041351327Z" level=info msg="StartContainer for \"ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b\"" Dec 12 17:44:22.042551 containerd[1526]: time="2025-12-12T17:44:22.042518886Z" level=info msg="connecting to shim ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" protocol=ttrpc version=3 Dec 12 17:44:22.068713 systemd[1]: Started cri-containerd-ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b.scope - libcontainer container ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b. Dec 12 17:44:22.111087 systemd[1]: cri-containerd-ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b.scope: Deactivated successfully. Dec 12 17:44:22.114078 containerd[1526]: time="2025-12-12T17:44:22.114030334Z" level=info msg="received container exit event container_id:\"ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b\" id:\"ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b\" pid:4636 exited_at:{seconds:1765561462 nanos:112695427}" Dec 12 17:44:22.125395 containerd[1526]: time="2025-12-12T17:44:22.125337225Z" level=info msg="StartContainer for \"ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b\" returns successfully" Dec 12 17:44:22.138851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7afe682cc7328fde31a2a2c10f65ff9f2bc9c24a4da9f7b41be5a41635025b-rootfs.mount: Deactivated successfully. Dec 12 17:44:23.026674 containerd[1526]: time="2025-12-12T17:44:23.026627208Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 17:44:23.046351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204369120.mount: Deactivated successfully. 
Dec 12 17:44:23.048521 containerd[1526]: time="2025-12-12T17:44:23.046656494Z" level=info msg="Container 394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:44:23.058555 containerd[1526]: time="2025-12-12T17:44:23.058513437Z" level=info msg="CreateContainer within sandbox \"70c4ef65c8b0c30e3b3495c5cfc43e2995c32bdf1515851e9c323287f3f7906c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0\"" Dec 12 17:44:23.059025 containerd[1526]: time="2025-12-12T17:44:23.059003125Z" level=info msg="StartContainer for \"394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0\"" Dec 12 17:44:23.059926 containerd[1526]: time="2025-12-12T17:44:23.059863508Z" level=info msg="connecting to shim 394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0" address="unix:///run/containerd/s/17e1f9d246ae62bc67cdf5f9ee10f59a37320db0bc88d56726070898e8c4a9a7" protocol=ttrpc version=3 Dec 12 17:44:23.077644 systemd[1]: Started cri-containerd-394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0.scope - libcontainer container 394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0. Dec 12 17:44:23.115263 containerd[1526]: time="2025-12-12T17:44:23.115219679Z" level=info msg="StartContainer for \"394a20726bb4f8d044b7dd9f72b2783ac5d4c35292723e52736b28d17c8234e0\" returns successfully" Dec 12 17:44:23.397617 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 12 17:44:24.044077 kubelet[2663]: I1212 17:44:24.043989 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4qnxs" podStartSLOduration=5.043963915 podStartE2EDuration="5.043963915s" podCreationTimestamp="2025-12-12 17:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:44:24.04323456 +0000 UTC m=+77.358137697" watchObservedRunningTime="2025-12-12 17:44:24.043963915 +0000 UTC m=+77.358867052" Dec 12 17:44:26.341833 systemd-networkd[1434]: lxc_health: Link UP Dec 12 17:44:26.353897 systemd-networkd[1434]: lxc_health: Gained carrier Dec 12 17:44:27.834636 systemd-networkd[1434]: lxc_health: Gained IPv6LL Dec 12 17:44:32.206255 sshd[4441]: Connection closed by 10.0.0.1 port 47752 Dec 12 17:44:32.206837 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Dec 12 17:44:32.212311 systemd[1]: sshd@24-10.0.0.114:22-10.0.0.1:47752.service: Deactivated successfully. Dec 12 17:44:32.215259 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:44:32.216274 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit. Dec 12 17:44:32.218905 systemd-logind[1510]: Removed session 25.