Dec 12 17:35:54.821019 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:35:54.821042 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:35:54.821052 kernel: KASLR enabled
Dec 12 17:35:54.821057 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:35:54.821063 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 17:35:54.821068 kernel: random: crng init done
Dec 12 17:35:54.821076 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 17:35:54.821081 kernel: secureboot: Secure boot enabled
Dec 12 17:35:54.821088 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:35:54.821095 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 17:35:54.821101 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:35:54.821107 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821112 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821118 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821125 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821139 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821145 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821151 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821160 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821166 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:35:54.821172 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:35:54.821178 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:35:54.821184 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:35:54.821190 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 17:35:54.821196 kernel: Zone ranges:
Dec 12 17:35:54.821203 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:35:54.821211 kernel: DMA32 empty
Dec 12 17:35:54.821216 kernel: Normal empty
Dec 12 17:35:54.821222 kernel: Device empty
Dec 12 17:35:54.821228 kernel: Movable zone start for each node
Dec 12 17:35:54.821234 kernel: Early memory node ranges
Dec 12 17:35:54.821262 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 17:35:54.821270 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 17:35:54.821276 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 17:35:54.821282 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 17:35:54.821288 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 17:35:54.821294 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 17:35:54.821303 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 17:35:54.821309 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 17:35:54.821315 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:35:54.821324 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:35:54.821331 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:35:54.821337 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 17:35:54.821344 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:35:54.821352 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:35:54.821358 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:35:54.821365 kernel: psci: Trusted OS migration not required
Dec 12 17:35:54.821371 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:35:54.821378 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:35:54.821384 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:35:54.821391 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:35:54.821397 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:35:54.821404 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:35:54.821412 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:35:54.821418 kernel: CPU features: detected: Spectre-v4
Dec 12 17:35:54.821425 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:35:54.821431 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:35:54.821438 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:35:54.821444 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:35:54.821451 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:35:54.821457 kernel: alternatives: applying boot alternatives
Dec 12 17:35:54.821465 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:35:54.821471 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:35:54.821478 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:35:54.821486 kernel: Fallback order for Node 0: 0
Dec 12 17:35:54.821492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:35:54.821499 kernel: Policy zone: DMA
Dec 12 17:35:54.821505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:35:54.821512 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:35:54.821518 kernel: software IO TLB: area num 4.
Dec 12 17:35:54.821525 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:35:54.821531 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 17:35:54.821538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:35:54.821544 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:35:54.821552 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:35:54.821559 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:35:54.821574 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:35:54.821582 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:35:54.821589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:35:54.821595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:35:54.821602 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:35:54.821609 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:35:54.821619 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:35:54.821626 kernel: GICv3: 256 SPIs implemented
Dec 12 17:35:54.821632 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:35:54.821639 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:35:54.821645 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:35:54.821651 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:35:54.821660 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:35:54.821666 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:35:54.821673 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:35:54.821680 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:35:54.821687 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:35:54.821694 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:35:54.821700 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:35:54.821707 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:35:54.821714 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:35:54.821720 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:35:54.821727 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:35:54.821735 kernel: arm-pv: using stolen time PV
Dec 12 17:35:54.821742 kernel: Console: colour dummy device 80x25
Dec 12 17:35:54.821749 kernel: ACPI: Core revision 20240827
Dec 12 17:35:54.821756 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:35:54.821763 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:35:54.821769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:35:54.821776 kernel: landlock: Up and running.
Dec 12 17:35:54.821783 kernel: SELinux: Initializing.
Dec 12 17:35:54.821789 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:35:54.821798 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:35:54.821805 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:35:54.821812 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:35:54.821819 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:35:54.821825 kernel: Remapping and enabling EFI services.
Dec 12 17:35:54.821832 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:35:54.821839 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:35:54.821846 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:35:54.821853 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:35:54.821861 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:35:54.821873 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:35:54.821880 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:35:54.821889 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:35:54.821896 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:35:54.821903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:35:54.821910 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:35:54.821917 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:35:54.821926 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:35:54.821933 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:35:54.821940 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:35:54.821947 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:35:54.821954 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:35:54.821961 kernel: SMP: Total of 4 processors activated.
Dec 12 17:35:54.821968 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:35:54.821975 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:35:54.821982 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:35:54.821990 kernel: CPU features: detected: Common not Private translations
Dec 12 17:35:54.821998 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:35:54.822005 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:35:54.822012 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:35:54.822020 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:35:54.822027 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:35:54.822034 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:35:54.822041 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:35:54.822048 kernel: alternatives: applying system-wide alternatives
Dec 12 17:35:54.822055 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:35:54.822064 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 12 17:35:54.822071 kernel: devtmpfs: initialized
Dec 12 17:35:54.822079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:35:54.822086 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:35:54.822093 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:35:54.822100 kernel: 0 pages in range for non-PLT usage
Dec 12 17:35:54.822107 kernel: 508400 pages in range for PLT usage
Dec 12 17:35:54.822114 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:35:54.822121 kernel: SMBIOS 3.0.0 present.
Dec 12 17:35:54.822130 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:35:54.822137 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:35:54.822144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:35:54.822151 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:35:54.822158 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:35:54.822166 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:35:54.822173 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:35:54.822180 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Dec 12 17:35:54.822188 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:35:54.822196 kernel: cpuidle: using governor menu
Dec 12 17:35:54.822204 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:35:54.822211 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:35:54.822218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:35:54.822225 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:35:54.822232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:35:54.822258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:35:54.822269 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:35:54.822278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:35:54.822289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:35:54.822296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:35:54.822303 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:35:54.822310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:35:54.822317 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:35:54.822324 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:35:54.822331 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:35:54.822338 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:35:54.822345 kernel: ACPI: Interpreter enabled
Dec 12 17:35:54.822355 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:35:54.822362 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:35:54.822369 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:35:54.822376 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:35:54.822383 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:35:54.822390 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:35:54.822397 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:35:54.822404 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:35:54.822411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:35:54.822554 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:35:54.822630 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:35:54.822691 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:35:54.822763 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:35:54.822819 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:35:54.822828 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:35:54.822836 kernel: PCI host bridge to bus 0000:00
Dec 12 17:35:54.822906 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:35:54.822962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:35:54.823016 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:35:54.823069 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:35:54.823145 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:35:54.823225 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:35:54.823306 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:35:54.823369 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:35:54.823428 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:35:54.823490 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:35:54.823551 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:35:54.823619 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:35:54.823673 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:35:54.823727 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:35:54.823778 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:35:54.823788 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:35:54.823795 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:35:54.823803 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:35:54.823809 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:35:54.823816 kernel: iommu: Default domain type: Translated
Dec 12 17:35:54.823823 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:35:54.823830 kernel: efivars: Registered efivars operations
Dec 12 17:35:54.823839 kernel: vgaarb: loaded
Dec 12 17:35:54.823846 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:35:54.823853 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:35:54.823860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:35:54.823867 kernel: pnp: PnP ACPI init
Dec 12 17:35:54.823933 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:35:54.823943 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:35:54.823951 kernel: NET: Registered PF_INET protocol family
Dec 12 17:35:54.823960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:35:54.823967 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:35:54.823974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:35:54.823981 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:35:54.823989 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:35:54.823996 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:35:54.824003 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:35:54.824010 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:35:54.824017 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:35:54.824026 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:35:54.824033 kernel: kvm [1]: HYP mode not available
Dec 12 17:35:54.824041 kernel: Initialise system trusted keyrings
Dec 12 17:35:54.824047 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:35:54.824054 kernel: Key type asymmetric registered
Dec 12 17:35:54.824062 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:35:54.824069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:35:54.824076 kernel: io scheduler mq-deadline registered
Dec 12 17:35:54.824083 kernel: io scheduler kyber registered
Dec 12 17:35:54.824091 kernel: io scheduler bfq registered
Dec 12 17:35:54.824098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:35:54.824105 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:35:54.824112 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:35:54.824174 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:35:54.824183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:35:54.824190 kernel: thunder_xcv, ver 1.0
Dec 12 17:35:54.824197 kernel: thunder_bgx, ver 1.0
Dec 12 17:35:54.824204 kernel: nicpf, ver 1.0
Dec 12 17:35:54.824213 kernel: nicvf, ver 1.0
Dec 12 17:35:54.824303 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:35:54.824370 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:35:54 UTC (1765560954)
Dec 12 17:35:54.824383 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:35:54.824390 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:35:54.824398 kernel: watchdog: NMI not fully supported
Dec 12 17:35:54.824407 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:35:54.824418 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:35:54.824428 kernel: Segment Routing with IPv6
Dec 12 17:35:54.824436 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:35:54.824443 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:35:54.824451 kernel: Key type dns_resolver registered
Dec 12 17:35:54.824458 kernel: registered taskstats version 1
Dec 12 17:35:54.824465 kernel: Loading compiled-in X.509 certificates
Dec 12 17:35:54.824473 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:35:54.824482 kernel: Demotion targets for Node 0: null
Dec 12 17:35:54.824490 kernel: Key type .fscrypt registered
Dec 12 17:35:54.824499 kernel: Key type fscrypt-provisioning registered
Dec 12 17:35:54.824508 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:35:54.824517 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:35:54.824525 kernel: ima: No architecture policies found
Dec 12 17:35:54.824533 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:35:54.824542 kernel: clk: Disabling unused clocks
Dec 12 17:35:54.824549 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:35:54.824569 kernel: Warning: unable to open an initial console.
Dec 12 17:35:54.824578 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:35:54.824588 kernel: Run /init as init process
Dec 12 17:35:54.824595 kernel: with arguments:
Dec 12 17:35:54.824603 kernel: /init
Dec 12 17:35:54.824610 kernel: with environment:
Dec 12 17:35:54.824616 kernel: HOME=/
Dec 12 17:35:54.824623 kernel: TERM=linux
Dec 12 17:35:54.824631 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:35:54.824641 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:35:54.824651 systemd[1]: Detected virtualization kvm.
Dec 12 17:35:54.824659 systemd[1]: Detected architecture arm64.
Dec 12 17:35:54.824667 systemd[1]: Running in initrd.
Dec 12 17:35:54.824674 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:35:54.824682 systemd[1]: Hostname set to .
Dec 12 17:35:54.824690 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:35:54.824698 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:35:54.824705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:35:54.824715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:35:54.824724 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:35:54.824733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:35:54.824741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:35:54.824750 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:35:54.824760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:35:54.824770 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:35:54.824779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:35:54.824787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:35:54.824796 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:35:54.824804 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:35:54.824812 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:35:54.824821 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:35:54.824829 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:35:54.824838 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:35:54.824854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:35:54.824870 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:35:54.824879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:35:54.824887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:35:54.824896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:35:54.824905 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:35:54.824913 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:35:54.824921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:35:54.824931 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:35:54.824940 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:35:54.824949 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:35:54.824957 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:35:54.824965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:35:54.824973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:35:54.824987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:35:54.825003 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:35:54.825011 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:35:54.825020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:35:54.825050 systemd-journald[244]: Collecting audit messages is disabled.
Dec 12 17:35:54.825071 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:35:54.825081 systemd-journald[244]: Journal started
Dec 12 17:35:54.825099 systemd-journald[244]: Runtime Journal (/run/log/journal/2129ff9e5e5c448d9472b1ee2633e7f9) is 6M, max 48.5M, 42.4M free.
Dec 12 17:35:54.813795 systemd-modules-load[246]: Inserted module 'overlay'
Dec 12 17:35:54.830897 kernel: Bridge firewalling registered
Dec 12 17:35:54.830920 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:35:54.828982 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 12 17:35:54.832860 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:35:54.835722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:35:54.837263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:35:54.842328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:35:54.844371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:35:54.846718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:35:54.858384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:35:54.866990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:35:54.870086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:35:54.874396 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:35:54.879294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:35:54.883537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:35:54.884863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:35:54.887898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:35:54.912413 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:35:54.931188 systemd-resolved[292]: Positive Trust Anchors:
Dec 12 17:35:54.931208 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:35:54.931250 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:35:54.936205 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 12 17:35:54.937337 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:35:54.942678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:35:55.001269 kernel: SCSI subsystem initialized
Dec 12 17:35:55.006257 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:35:55.014269 kernel: iscsi: registered transport (tcp)
Dec 12 17:35:55.027274 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:35:55.027304 kernel: QLogic iSCSI HBA Driver
Dec 12 17:35:55.046301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:35:55.067091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:35:55.070070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:35:55.115743 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:35:55.118001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:35:55.176288 kernel: raid6: neonx8 gen() 15717 MB/s Dec 12 17:35:55.193269 kernel: raid6: neonx4 gen() 15748 MB/s Dec 12 17:35:55.210270 kernel: raid6: neonx2 gen() 13100 MB/s Dec 12 17:35:55.227282 kernel: raid6: neonx1 gen() 10395 MB/s Dec 12 17:35:55.244270 kernel: raid6: int64x8 gen() 6880 MB/s Dec 12 17:35:55.261272 kernel: raid6: int64x4 gen() 7311 MB/s Dec 12 17:35:55.278267 kernel: raid6: int64x2 gen() 6048 MB/s Dec 12 17:35:55.295489 kernel: raid6: int64x1 gen() 5009 MB/s Dec 12 17:35:55.295509 kernel: raid6: using algorithm neonx4 gen() 15748 MB/s Dec 12 17:35:55.313542 kernel: raid6: .... xor() 12265 MB/s, rmw enabled Dec 12 17:35:55.313557 kernel: raid6: using neon recovery algorithm Dec 12 17:35:55.319790 kernel: xor: measuring software checksum speed Dec 12 17:35:55.319815 kernel: 8regs : 21641 MB/sec Dec 12 17:35:55.319826 kernel: 32regs : 21670 MB/sec Dec 12 17:35:55.320466 kernel: arm64_neon : 27965 MB/sec Dec 12 17:35:55.320479 kernel: xor: using function: arm64_neon (27965 MB/sec) Dec 12 17:35:55.374265 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:35:55.380751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:35:55.383464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:35:55.410045 systemd-udevd[501]: Using default interface naming scheme 'v255'. Dec 12 17:35:55.414225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:35:55.416798 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:35:55.449138 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Dec 12 17:35:55.473973 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:35:55.476877 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:35:55.530401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 12 17:35:55.533359 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:35:55.582267 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:35:55.582419 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 12 17:35:55.591544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:35:55.591613 kernel: GPT:9289727 != 19775487 Dec 12 17:35:55.591628 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:35:55.593267 kernel: GPT:9289727 != 19775487 Dec 12 17:35:55.593691 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:35:55.595282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:35:55.597133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:35:55.597281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:35:55.599963 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:35:55.603828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:35:55.627866 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:35:55.634686 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:35:55.636217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:35:55.648086 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:35:55.662954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:35:55.670388 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 17:35:55.671788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Dec 12 17:35:55.675187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:35:55.677691 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:35:55.680265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:35:55.683424 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:35:55.685553 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:35:55.704992 disk-uuid[594]: Primary Header is updated. Dec 12 17:35:55.704992 disk-uuid[594]: Secondary Entries is updated. Dec 12 17:35:55.704992 disk-uuid[594]: Secondary Header is updated. Dec 12 17:35:55.707492 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:35:55.711955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:35:56.717273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:35:56.717853 disk-uuid[600]: The operation has completed successfully. Dec 12 17:35:56.742032 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:35:56.742138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:35:56.776097 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:35:56.799445 sh[614]: Success Dec 12 17:35:56.819972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:35:56.820018 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:35:56.820392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:35:56.837938 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:35:56.888974 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:35:56.912947 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 12 17:35:56.915716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 17:35:56.928295 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626) Dec 12 17:35:56.930776 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 12 17:35:56.930803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:35:56.937359 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:35:56.937418 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:35:56.938465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 17:35:56.940023 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:35:56.941575 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:35:56.943424 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:35:56.947144 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:35:56.969429 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657) Dec 12 17:35:56.969488 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:35:56.969506 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:35:56.974004 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:35:56.974065 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:35:56.979313 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:35:56.981321 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 12 17:35:56.984064 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:35:57.079296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:35:57.082974 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:35:57.103654 ignition[699]: Ignition 2.22.0 Dec 12 17:35:57.103668 ignition[699]: Stage: fetch-offline Dec 12 17:35:57.103700 ignition[699]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:57.103707 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:57.103783 ignition[699]: parsed url from cmdline: "" Dec 12 17:35:57.103786 ignition[699]: no config URL provided Dec 12 17:35:57.103791 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:35:57.103798 ignition[699]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:35:57.103820 ignition[699]: op(1): [started] loading QEMU firmware config module Dec 12 17:35:57.103825 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:35:57.110372 ignition[699]: op(1): [finished] loading QEMU firmware config module Dec 12 17:35:57.125758 systemd-networkd[806]: lo: Link UP Dec 12 17:35:57.125768 systemd-networkd[806]: lo: Gained carrier Dec 12 17:35:57.126491 systemd-networkd[806]: Enumeration completed Dec 12 17:35:57.127109 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:35:57.127113 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:35:57.128126 systemd-networkd[806]: eth0: Link UP Dec 12 17:35:57.128222 systemd-networkd[806]: eth0: Gained carrier Dec 12 17:35:57.128232 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 12 17:35:57.128715 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:35:57.131689 systemd[1]: Reached target network.target - Network. Dec 12 17:35:57.147309 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:35:57.169237 ignition[699]: parsing config with SHA512: 3e5174bbe3a21fe2cf4ee434c0b88a1b682a82c8d1b56672e253e6574db2e4b4237d0aaee8dc10b467adab367233afe58164a4ea435f509a96f97e299b962018 Dec 12 17:35:57.174985 unknown[699]: fetched base config from "system" Dec 12 17:35:57.174996 unknown[699]: fetched user config from "qemu" Dec 12 17:35:57.175376 ignition[699]: fetch-offline: fetch-offline passed Dec 12 17:35:57.175433 ignition[699]: Ignition finished successfully Dec 12 17:35:57.178378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:35:57.180547 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:35:57.181359 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:35:57.222337 ignition[814]: Ignition 2.22.0 Dec 12 17:35:57.222352 ignition[814]: Stage: kargs Dec 12 17:35:57.222482 ignition[814]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:57.222491 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:57.223266 ignition[814]: kargs: kargs passed Dec 12 17:35:57.223317 ignition[814]: Ignition finished successfully Dec 12 17:35:57.229640 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:35:57.231835 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:35:57.266645 ignition[823]: Ignition 2.22.0 Dec 12 17:35:57.266661 ignition[823]: Stage: disks Dec 12 17:35:57.266811 ignition[823]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:57.266820 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:57.267993 ignition[823]: disks: disks passed Dec 12 17:35:57.269945 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:35:57.268048 ignition[823]: Ignition finished successfully Dec 12 17:35:57.271677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:35:57.273337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:35:57.275494 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:35:57.277165 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:35:57.279697 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:35:57.282859 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:35:57.309936 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 17:35:57.314523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:35:57.316912 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:35:57.387271 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 12 17:35:57.388134 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:35:57.389819 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:35:57.392898 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:35:57.404943 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:35:57.406232 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 12 17:35:57.406290 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:35:57.406318 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:35:57.419609 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841) Dec 12 17:35:57.419648 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:35:57.419660 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:35:57.414337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:35:57.418315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:35:57.425015 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:35:57.425052 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:35:57.426106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:35:57.465948 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:35:57.470396 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:35:57.474925 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:35:57.479286 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:35:57.551206 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:35:57.555633 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:35:57.557297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:35:57.584309 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:35:57.602455 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 17:35:57.619847 ignition[956]: INFO : Ignition 2.22.0 Dec 12 17:35:57.619847 ignition[956]: INFO : Stage: mount Dec 12 17:35:57.621769 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:57.621769 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:57.621769 ignition[956]: INFO : mount: mount passed Dec 12 17:35:57.621769 ignition[956]: INFO : Ignition finished successfully Dec 12 17:35:57.624300 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:35:57.626617 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:35:57.926676 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:35:57.928167 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:35:57.947267 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Dec 12 17:35:57.949537 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:35:57.949595 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:35:57.952263 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:35:57.952290 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:35:57.953656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:35:57.985099 ignition[985]: INFO : Ignition 2.22.0 Dec 12 17:35:57.985099 ignition[985]: INFO : Stage: files Dec 12 17:35:57.987115 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:57.987115 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:57.987115 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:35:57.991253 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:35:57.991253 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:35:57.991253 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:35:57.991253 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:35:57.991253 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:35:57.990103 unknown[985]: wrote ssh authorized keys file for user: core Dec 12 17:35:58.000337 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:35:58.000337 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 17:35:58.062909 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:35:58.145312 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:35:58.145312 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:35:58.149503 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 12 17:35:58.287455 systemd-networkd[806]: eth0: Gained IPv6LL Dec 12 17:35:58.343191 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 12 17:35:58.430447 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:35:58.432605 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:35:58.447780 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:35:58.447780 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:35:58.447780 ignition[985]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:35:58.447780 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:35:58.447780 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:35:58.447780 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:35:58.684492 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 12 17:35:58.935098 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:35:58.935098 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 12 17:35:58.939163 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 12 17:35:58.941367 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:35:58.960580 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:35:58.964446 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:35:58.967344 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:35:58.967344 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:35:58.967344 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:35:58.967344 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:35:58.967344 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:35:58.967344 ignition[985]: INFO : files: files passed Dec 12 17:35:58.967344 ignition[985]: INFO : Ignition finished successfully Dec 12 17:35:58.968124 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:35:58.972667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:35:58.975530 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:35:58.995779 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:35:58.995892 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 12 17:35:59.000189 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:35:59.001794 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:35:59.001794 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:35:59.005334 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:35:59.005832 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:35:59.009711 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:35:59.011728 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:35:59.080274 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:35:59.080409 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:35:59.081988 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:35:59.083882 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:35:59.085069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:35:59.085922 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:35:59.112293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:35:59.115046 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:35:59.134048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:35:59.135482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:35:59.139578 systemd[1]: Stopped target timers.target - Timer Units. 
Dec 12 17:35:59.141622 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:35:59.141765 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:35:59.144363 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:35:59.146431 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:35:59.148163 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:35:59.149998 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:35:59.152180 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:35:59.154425 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:35:59.156790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:35:59.161742 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:35:59.164087 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:35:59.167977 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:35:59.171198 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:35:59.173282 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:35:59.173433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:35:59.183480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:35:59.185677 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:35:59.187838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:35:59.187952 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:35:59.190163 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:35:59.190313 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 12 17:35:59.193456 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:35:59.193609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:35:59.196286 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:35:59.198089 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:35:59.198268 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:35:59.200422 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:35:59.203322 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:35:59.205524 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:35:59.205636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:35:59.207531 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:35:59.207626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:35:59.209703 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:35:59.209819 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:35:59.212579 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:35:59.212682 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:35:59.215266 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:35:59.216962 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:35:59.218162 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:35:59.218315 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:35:59.220472 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:35:59.220592 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 12 17:35:59.227925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:35:59.229633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:35:59.240641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:35:59.248657 ignition[1040]: INFO : Ignition 2.22.0 Dec 12 17:35:59.248657 ignition[1040]: INFO : Stage: umount Dec 12 17:35:59.250430 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:35:59.250430 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:35:59.250430 ignition[1040]: INFO : umount: umount passed Dec 12 17:35:59.250430 ignition[1040]: INFO : Ignition finished successfully Dec 12 17:35:59.251857 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:35:59.251955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:35:59.253878 systemd[1]: Stopped target network.target - Network. Dec 12 17:35:59.256374 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:35:59.256435 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:35:59.258295 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:35:59.258343 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:35:59.260366 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:35:59.260424 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:35:59.262175 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:35:59.262219 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:35:59.264448 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:35:59.266221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:35:59.273349 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 12 17:35:59.273475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 17:35:59.276948 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 17:35:59.277185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:35:59.277228 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:35:59.282664 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:35:59.284710 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 17:35:59.284805 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 17:35:59.289734 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 17:35:59.289860 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 17:35:59.292209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 17:35:59.292262 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:35:59.295166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 17:35:59.296394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 17:35:59.296460 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:35:59.298878 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:35:59.298930 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:35:59.303728 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 17:35:59.303779 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:35:59.306403 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:35:59.309864 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:35:59.315050 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 17:35:59.315151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 17:35:59.317536 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 17:35:59.317598 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:35:59.325926 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 17:35:59.326047 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 17:35:59.330903 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 17:35:59.332316 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:35:59.333901 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 17:35:59.333938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:35:59.335958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 17:35:59.335990 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:35:59.337885 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 17:35:59.337940 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:35:59.340800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 17:35:59.340855 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:35:59.343638 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 17:35:59.343692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:35:59.346866 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 17:35:59.348142 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 17:35:59.348203 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:35:59.351396 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 17:35:59.351440 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:35:59.355128 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 17:35:59.355175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:35:59.358933 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 17:35:59.358976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:35:59.361410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:35:59.361460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:35:59.365483 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 17:35:59.367275 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 17:35:59.369448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 17:35:59.372202 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 17:35:59.403016 systemd[1]: Switching root.
Dec 12 17:35:59.441340 systemd-journald[244]: Journal stopped
Dec 12 17:36:00.263324 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Dec 12 17:36:00.263376 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 17:36:00.263388 kernel: SELinux: policy capability open_perms=1
Dec 12 17:36:00.263397 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 17:36:00.263410 kernel: SELinux: policy capability always_check_network=0
Dec 12 17:36:00.263420 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 17:36:00.263429 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 17:36:00.263440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 17:36:00.263451 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 17:36:00.263460 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 17:36:00.263469 kernel: audit: type=1403 audit(1765560959.618:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 17:36:00.263485 systemd[1]: Successfully loaded SELinux policy in 46.615ms.
Dec 12 17:36:00.263505 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.383ms.
Dec 12 17:36:00.263516 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:36:00.263527 systemd[1]: Detected virtualization kvm.
Dec 12 17:36:00.263538 systemd[1]: Detected architecture arm64.
Dec 12 17:36:00.263561 systemd[1]: Detected first boot.
Dec 12 17:36:00.263573 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:36:00.263584 zram_generator::config[1087]: No configuration found.
Dec 12 17:36:00.263598 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 17:36:00.263609 systemd[1]: Populated /etc with preset unit settings.
Dec 12 17:36:00.263619 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 17:36:00.263632 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 17:36:00.263642 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 17:36:00.263653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:36:00.263666 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 17:36:00.263677 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 17:36:00.263686 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 17:36:00.263696 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 17:36:00.263706 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 17:36:00.263717 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 17:36:00.263727 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 17:36:00.263738 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 17:36:00.263748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:36:00.263758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:36:00.263768 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 17:36:00.263778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 17:36:00.263788 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 17:36:00.263798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:36:00.263808 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 12 17:36:00.263818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:36:00.263830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:36:00.263840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 17:36:00.263850 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 17:36:00.263860 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:36:00.263870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:36:00.263880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:36:00.263890 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:36:00.263900 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:36:00.263911 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:36:00.263921 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 17:36:00.263931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 17:36:00.263941 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 17:36:00.263952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:36:00.263961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:36:00.263971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:36:00.263981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 17:36:00.263990 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 17:36:00.264001 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 17:36:00.264012 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 17:36:00.264021 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 17:36:00.264031 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 17:36:00.264041 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 17:36:00.264051 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 17:36:00.264061 systemd[1]: Reached target machines.target - Containers.
Dec 12 17:36:00.264071 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 17:36:00.264080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:36:00.264091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:36:00.264101 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 17:36:00.264111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:36:00.264121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:36:00.264131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:36:00.264141 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 17:36:00.264151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:36:00.264161 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 17:36:00.264173 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 17:36:00.264182 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 17:36:00.264192 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 17:36:00.264202 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 17:36:00.264212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:36:00.264222 kernel: loop: module loaded
Dec 12 17:36:00.264231 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:36:00.264251 kernel: fuse: init (API version 7.41)
Dec 12 17:36:00.264264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:36:00.264276 kernel: ACPI: bus type drm_connector registered
Dec 12 17:36:00.264285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:36:00.264295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 17:36:00.264305 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 17:36:00.264315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:36:00.264325 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 17:36:00.264335 systemd[1]: Stopped verity-setup.service.
Dec 12 17:36:00.264347 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 17:36:00.264356 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 17:36:00.264366 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 17:36:00.264376 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 17:36:00.264388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 17:36:00.264423 systemd-journald[1155]: Collecting audit messages is disabled.
Dec 12 17:36:00.264446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 17:36:00.264457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:36:00.264469 systemd-journald[1155]: Journal started
Dec 12 17:36:00.264492 systemd-journald[1155]: Runtime Journal (/run/log/journal/2129ff9e5e5c448d9472b1ee2633e7f9) is 6M, max 48.5M, 42.4M free.
Dec 12 17:36:00.013628 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 17:36:00.037567 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 17:36:00.037990 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 17:36:00.268293 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:36:00.269264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 17:36:00.271661 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 17:36:00.273291 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 17:36:00.274867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:36:00.275024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:36:00.276582 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:36:00.276755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:36:00.278298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:36:00.278454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:36:00.279996 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 17:36:00.280168 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 17:36:00.281776 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:36:00.281923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:36:00.283442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:36:00.284977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:36:00.286681 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 17:36:00.289386 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 17:36:00.301063 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:36:00.303508 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 17:36:00.305700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 17:36:00.307053 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 17:36:00.307089 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:36:00.309166 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 17:36:00.318139 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 17:36:00.319833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:36:00.320945 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 17:36:00.323043 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 17:36:00.324531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:36:00.327377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 17:36:00.328828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:36:00.330831 systemd-journald[1155]: Time spent on flushing to /var/log/journal/2129ff9e5e5c448d9472b1ee2633e7f9 is 42.008ms for 883 entries.
Dec 12 17:36:00.330831 systemd-journald[1155]: System Journal (/var/log/journal/2129ff9e5e5c448d9472b1ee2633e7f9) is 8M, max 195.6M, 187.6M free.
Dec 12 17:36:00.378417 systemd-journald[1155]: Received client request to flush runtime journal.
Dec 12 17:36:00.378458 kernel: loop0: detected capacity change from 0 to 100632
Dec 12 17:36:00.329807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:36:00.335960 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 17:36:00.339467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:36:00.344286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:36:00.345992 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 17:36:00.348440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 17:36:00.380371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:36:00.362796 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 17:36:00.364652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 17:36:00.371352 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 17:36:00.374714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:36:00.384918 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 12 17:36:00.384939 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 12 17:36:00.385495 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 17:36:00.391976 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:36:00.396316 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 17:36:00.410270 kernel: loop1: detected capacity change from 0 to 119840
Dec 12 17:36:00.438289 kernel: loop2: detected capacity change from 0 to 207008
Dec 12 17:36:00.463339 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 17:36:00.466070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:36:00.475991 kernel: loop3: detected capacity change from 0 to 100632
Dec 12 17:36:00.490270 kernel: loop4: detected capacity change from 0 to 119840
Dec 12 17:36:00.488526 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 17:36:00.489653 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 12 17:36:00.489663 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Dec 12 17:36:00.493446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:36:00.504276 kernel: loop5: detected capacity change from 0 to 207008
Dec 12 17:36:00.512786 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 17:36:00.513226 (sd-merge)[1226]: Merged extensions into '/usr'.
Dec 12 17:36:00.518357 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 17:36:00.518380 systemd[1]: Reloading...
Dec 12 17:36:00.581265 zram_generator::config[1255]: No configuration found.
Dec 12 17:36:00.695842 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 17:36:00.729361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 17:36:00.729668 systemd[1]: Reloading finished in 210 ms.
Dec 12 17:36:00.747116 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 17:36:00.750285 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 17:36:00.768539 systemd[1]: Starting ensure-sysext.service...
Dec 12 17:36:00.770697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:36:00.781286 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Dec 12 17:36:00.781351 systemd[1]: Reloading...
Dec 12 17:36:00.793351 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 17:36:00.793383 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 17:36:00.793671 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 17:36:00.793853 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 17:36:00.794472 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 17:36:00.794683 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Dec 12 17:36:00.794730 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Dec 12 17:36:00.800301 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:36:00.800316 systemd-tmpfiles[1291]: Skipping /boot
Dec 12 17:36:00.806793 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:36:00.806810 systemd-tmpfiles[1291]: Skipping /boot
Dec 12 17:36:00.837397 zram_generator::config[1318]: No configuration found.
Dec 12 17:36:00.968524 systemd[1]: Reloading finished in 186 ms.
Dec 12 17:36:00.991069 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 17:36:00.998325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:36:01.007360 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:36:01.010165 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 17:36:01.012997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 17:36:01.016146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:36:01.020049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:36:01.022722 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 17:36:01.029047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:36:01.036447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:36:01.039868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:36:01.043115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:36:01.044665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:36:01.044795 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:36:01.046937 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 17:36:01.049072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:36:01.049230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:36:01.052005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:36:01.052152 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:36:01.060517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:36:01.062293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:36:01.067474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:36:01.068866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:36:01.073583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:36:01.074034 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Dec 12 17:36:01.075441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:36:01.075570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:36:01.075651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:36:01.076989 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 17:36:01.080004 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 17:36:01.083647 augenrules[1389]: No rules
Dec 12 17:36:01.085477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 17:36:01.087856 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:36:01.088074 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:36:01.090026 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 17:36:01.092142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:36:01.092329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:36:01.094351 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:36:01.094489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:36:01.097891 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 17:36:01.099639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:36:01.113624 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:36:01.117452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:36:01.118654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:36:01.124101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:36:01.137628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:36:01.147102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:36:01.149013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:36:01.149145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:36:01.157496 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:36:01.158774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 17:36:01.160002 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 17:36:01.163315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:36:01.169811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:36:01.172703 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:36:01.172877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:36:01.179078 augenrules[1427]: /sbin/augenrules: No change
Dec 12 17:36:01.185153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:36:01.185690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:36:01.187617 augenrules[1456]: No rules
Dec 12 17:36:01.188207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:36:01.188386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:36:01.191055 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:36:01.191230 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:36:01.199013 systemd[1]: Finished ensure-sysext.service.
Dec 12 17:36:01.212553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:36:01.212615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:36:01.216476 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 17:36:01.243982 systemd-resolved[1358]: Positive Trust Anchors:
Dec 12 17:36:01.243998 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:36:01.244029 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:36:01.244577 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 12 17:36:01.253099 systemd-resolved[1358]: Defaulting to hostname 'linux'.
Dec 12 17:36:01.256106 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:36:01.257926 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:36:01.259374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:36:01.262290 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 17:36:01.268175 systemd-networkd[1444]: lo: Link UP
Dec 12 17:36:01.268184 systemd-networkd[1444]: lo: Gained carrier
Dec 12 17:36:01.269085 systemd-networkd[1444]: Enumeration completed
Dec 12 17:36:01.269207 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:36:01.269818 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:36:01.269826 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:36:01.270449 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:36:01.270476 systemd-networkd[1444]: eth0: Link UP Dec 12 17:36:01.270594 systemd[1]: Reached target network.target - Network. Dec 12 17:36:01.270600 systemd-networkd[1444]: eth0: Gained carrier Dec 12 17:36:01.270611 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:36:01.279236 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:36:01.281828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:36:01.285333 systemd-networkd[1444]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:36:01.294456 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:36:01.306486 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:36:01.307198 systemd-timesyncd[1471]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:36:01.307368 systemd-timesyncd[1471]: Initial clock synchronization to Fri 2025-12-12 17:36:01.238940 UTC. Dec 12 17:36:01.308797 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:36:01.310517 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:36:01.311828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:36:01.313180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:36:01.314547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Dec 12 17:36:01.315884 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:36:01.315917 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:36:01.317063 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:36:01.318438 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:36:01.319828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:36:01.321188 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:36:01.322945 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:36:01.325555 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:36:01.328666 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:36:01.330894 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:36:01.332233 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:36:01.335135 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:36:01.336619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:36:01.338554 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:36:01.339783 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:36:01.340826 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:36:01.341911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:36:01.341942 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:36:01.344360 systemd[1]: Starting containerd.service - containerd container runtime... 
Dec 12 17:36:01.347412 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:36:01.352345 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:36:01.363151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:36:01.367402 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:36:01.368509 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:36:01.371453 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:36:01.373523 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:36:01.375110 jq[1503]: false Dec 12 17:36:01.379394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:36:01.382200 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:36:01.386100 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:36:01.388262 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:36:01.388738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:36:01.391341 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:36:01.393405 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:36:01.396161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:36:01.399525 extend-filesystems[1504]: Found /dev/vda6 Dec 12 17:36:01.401634 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 12 17:36:01.406046 jq[1515]: true Dec 12 17:36:01.403315 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:36:01.406752 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:36:01.406932 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:36:01.411983 extend-filesystems[1504]: Found /dev/vda9 Dec 12 17:36:01.413952 extend-filesystems[1504]: Checking size of /dev/vda9 Dec 12 17:36:01.417690 update_engine[1514]: I20251212 17:36:01.417451 1514 main.cc:92] Flatcar Update Engine starting Dec 12 17:36:01.431615 extend-filesystems[1504]: Resized partition /dev/vda9 Dec 12 17:36:01.433683 extend-filesystems[1542]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:36:01.438002 tar[1519]: linux-arm64/LICENSE Dec 12 17:36:01.438002 tar[1519]: linux-arm64/helm Dec 12 17:36:01.439646 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:36:01.441307 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:36:01.447549 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:36:01.448050 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:36:01.449413 dbus-daemon[1494]: [system] SELinux support is enabled Dec 12 17:36:01.452177 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:36:01.453143 update_engine[1514]: I20251212 17:36:01.453076 1514 update_check_scheduler.cc:74] Next update check in 8m43s Dec 12 17:36:01.459192 jq[1534]: true Dec 12 17:36:01.458421 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:36:01.462108 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 12 17:36:01.462168 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:36:01.464797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:36:01.464825 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:36:01.468699 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:36:01.496996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:36:01.500587 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:36:01.532738 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:36:01.532738 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:36:01.532738 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:36:01.542609 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Dec 12 17:36:01.542617 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:36:01.544344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:36:01.547702 bash[1574]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:36:01.560605 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:36:01.561431 systemd-logind[1512]: New seat seat0. Dec 12 17:36:01.572769 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:36:01.574559 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:36:01.576207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:36:01.583976 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 12 17:36:01.584484 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:36:01.639902 containerd[1539]: time="2025-12-12T17:36:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:36:01.640968 containerd[1539]: time="2025-12-12T17:36:01.640934040Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:36:01.657011 containerd[1539]: time="2025-12-12T17:36:01.656961840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.12µs" Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657122960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657148680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657317440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657334280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657356960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657406120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657416400Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657655040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657669520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657680320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657688320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658191 containerd[1539]: time="2025-12-12T17:36:01.657754200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.657949680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.657978120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.657987720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.658027400Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.658284600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:36:01.658453 containerd[1539]: time="2025-12-12T17:36:01.658383160Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:36:01.662824 containerd[1539]: time="2025-12-12T17:36:01.662788800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:36:01.662954 containerd[1539]: time="2025-12-12T17:36:01.662940000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:36:01.663066 containerd[1539]: time="2025-12-12T17:36:01.663053400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:36:01.663123 containerd[1539]: time="2025-12-12T17:36:01.663111440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:36:01.663190 containerd[1539]: time="2025-12-12T17:36:01.663177680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:36:01.663238 containerd[1539]: time="2025-12-12T17:36:01.663226840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:36:01.663327 containerd[1539]: time="2025-12-12T17:36:01.663312480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:36:01.663376 containerd[1539]: time="2025-12-12T17:36:01.663364840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:36:01.663424 containerd[1539]: time="2025-12-12T17:36:01.663412560Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:36:01.663473 containerd[1539]: time="2025-12-12T17:36:01.663462160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:36:01.663527 containerd[1539]: time="2025-12-12T17:36:01.663514920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:36:01.663595 containerd[1539]: time="2025-12-12T17:36:01.663581280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:36:01.663773 containerd[1539]: time="2025-12-12T17:36:01.663752120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:36:01.663839 containerd[1539]: time="2025-12-12T17:36:01.663826200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:36:01.663890 containerd[1539]: time="2025-12-12T17:36:01.663878840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:36:01.663937 containerd[1539]: time="2025-12-12T17:36:01.663925760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:36:01.663986 containerd[1539]: time="2025-12-12T17:36:01.663974640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:36:01.664035 containerd[1539]: time="2025-12-12T17:36:01.664023800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:36:01.664102 containerd[1539]: time="2025-12-12T17:36:01.664087640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:36:01.664160 containerd[1539]: time="2025-12-12T17:36:01.664147000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 
17:36:01.664211 containerd[1539]: time="2025-12-12T17:36:01.664200080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:36:01.664295 containerd[1539]: time="2025-12-12T17:36:01.664281000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:36:01.664347 containerd[1539]: time="2025-12-12T17:36:01.664335480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:36:01.664601 containerd[1539]: time="2025-12-12T17:36:01.664582640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:36:01.664664 containerd[1539]: time="2025-12-12T17:36:01.664651960Z" level=info msg="Start snapshots syncer" Dec 12 17:36:01.664737 containerd[1539]: time="2025-12-12T17:36:01.664725240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:36:01.665166 containerd[1539]: time="2025-12-12T17:36:01.665123320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:36:01.665351 containerd[1539]: time="2025-12-12T17:36:01.665332320Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:36:01.665467 containerd[1539]: time="2025-12-12T17:36:01.665451720Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:36:01.665758 containerd[1539]: time="2025-12-12T17:36:01.665734760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:36:01.665835 containerd[1539]: time="2025-12-12T17:36:01.665821280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:36:01.665885 containerd[1539]: time="2025-12-12T17:36:01.665873280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:36:01.665936 containerd[1539]: time="2025-12-12T17:36:01.665924600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:36:01.665988 containerd[1539]: time="2025-12-12T17:36:01.665976200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:36:01.666037 containerd[1539]: time="2025-12-12T17:36:01.666025920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:36:01.666102 containerd[1539]: time="2025-12-12T17:36:01.666089320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:36:01.666170 containerd[1539]: time="2025-12-12T17:36:01.666157040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:36:01.666222 containerd[1539]: time="2025-12-12T17:36:01.666209840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:36:01.666292 containerd[1539]: time="2025-12-12T17:36:01.666279600Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:36:01.666377 containerd[1539]: time="2025-12-12T17:36:01.666362000Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:36:01.666494 containerd[1539]: time="2025-12-12T17:36:01.666478560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666562520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666582400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666591040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666602360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666620520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666697360Z" level=info msg="runtime interface created" Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666702400Z" level=info msg="created NRI interface" Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666710320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666721960Z" level=info msg="Connect containerd service" Dec 12 17:36:01.666779 containerd[1539]: time="2025-12-12T17:36:01.666746040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:36:01.667883 
containerd[1539]: time="2025-12-12T17:36:01.667851120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:36:01.737925 containerd[1539]: time="2025-12-12T17:36:01.737858880Z" level=info msg="Start subscribing containerd event" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738067680Z" level=info msg="Start recovering state" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738116160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738156760Z" level=info msg="Start event monitor" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738166520Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738170480Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738188760Z" level=info msg="Start streaming server" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738197160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738204080Z" level=info msg="runtime interface starting up..." Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738208800Z" level=info msg="starting plugins..." Dec 12 17:36:01.738265 containerd[1539]: time="2025-12-12T17:36:01.738223160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:36:01.738639 containerd[1539]: time="2025-12-12T17:36:01.738624880Z" level=info msg="containerd successfully booted in 0.099088s" Dec 12 17:36:01.738729 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 12 17:36:01.753568 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:36:01.775299 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:36:01.778392 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:36:01.795350 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:36:01.795613 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:36:01.798524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:36:01.817312 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:36:01.820528 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:36:01.823132 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:36:01.824717 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:36:01.832506 tar[1519]: linux-arm64/README.md Dec 12 17:36:01.849694 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:36:02.575443 systemd-networkd[1444]: eth0: Gained IPv6LL Dec 12 17:36:02.578134 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:36:02.580100 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:36:02.582736 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:36:02.585130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:02.596203 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:36:02.613494 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:36:02.615304 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:36:02.617711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 12 17:36:02.620299 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:36:03.177257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:03.179170 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:36:03.181288 systemd[1]: Startup finished in 2.124s (kernel) + 5.014s (initrd) + 3.609s (userspace) = 10.749s. Dec 12 17:36:03.182501 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:36:03.559680 kubelet[1641]: E1212 17:36:03.559546 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:36:03.562016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:36:03.562135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:36:03.562428 systemd[1]: kubelet.service: Consumed 758ms CPU time, 255.7M memory peak. Dec 12 17:36:07.464639 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:36:07.466142 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:44112.service - OpenSSH per-connection server daemon (10.0.0.1:44112). Dec 12 17:36:07.543784 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 44112 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:07.545675 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:07.551377 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:36:07.552823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 12 17:36:07.557861 systemd-logind[1512]: New session 1 of user core. Dec 12 17:36:07.574862 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:36:07.577278 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:36:07.591119 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:36:07.593196 systemd-logind[1512]: New session c1 of user core. Dec 12 17:36:07.704055 systemd[1660]: Queued start job for default target default.target. Dec 12 17:36:07.726230 systemd[1660]: Created slice app.slice - User Application Slice. Dec 12 17:36:07.726285 systemd[1660]: Reached target paths.target - Paths. Dec 12 17:36:07.726326 systemd[1660]: Reached target timers.target - Timers. Dec 12 17:36:07.727501 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:36:07.736641 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:36:07.736705 systemd[1660]: Reached target sockets.target - Sockets. Dec 12 17:36:07.736756 systemd[1660]: Reached target basic.target - Basic System. Dec 12 17:36:07.736784 systemd[1660]: Reached target default.target - Main User Target. Dec 12 17:36:07.736810 systemd[1660]: Startup finished in 138ms. Dec 12 17:36:07.736904 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:36:07.738176 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:36:07.801520 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Dec 12 17:36:07.861213 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:07.861693 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:07.865461 systemd-logind[1512]: New session 2 of user core. 
Dec 12 17:36:07.875444 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:36:07.930312 sshd[1674]: Connection closed by 10.0.0.1 port 44114 Dec 12 17:36:07.930232 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:07.957268 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:44114.service: Deactivated successfully. Dec 12 17:36:07.959550 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:36:07.960298 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:36:07.963459 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:44116.service - OpenSSH per-connection server daemon (10.0.0.1:44116). Dec 12 17:36:07.964340 systemd-logind[1512]: Removed session 2. Dec 12 17:36:08.022866 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 44116 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.024126 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.028305 systemd-logind[1512]: New session 3 of user core. Dec 12 17:36:08.037401 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:36:08.084937 sshd[1683]: Connection closed by 10.0.0.1 port 44116 Dec 12 17:36:08.085332 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:08.096345 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:44116.service: Deactivated successfully. Dec 12 17:36:08.099554 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:36:08.100211 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:36:08.102492 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:44122.service - OpenSSH per-connection server daemon (10.0.0.1:44122). Dec 12 17:36:08.103281 systemd-logind[1512]: Removed session 3. 
Dec 12 17:36:08.154189 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 44122 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.155483 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.159294 systemd-logind[1512]: New session 4 of user core. Dec 12 17:36:08.177424 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:36:08.228339 sshd[1692]: Connection closed by 10.0.0.1 port 44122 Dec 12 17:36:08.228410 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:08.240260 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:44122.service: Deactivated successfully. Dec 12 17:36:08.241866 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:36:08.243778 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:36:08.245952 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:44134.service - OpenSSH per-connection server daemon (10.0.0.1:44134). Dec 12 17:36:08.246387 systemd-logind[1512]: Removed session 4. Dec 12 17:36:08.312531 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 44134 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.314609 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.320211 systemd-logind[1512]: New session 5 of user core. Dec 12 17:36:08.334472 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:36:08.392333 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:36:08.392595 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:36:08.406097 sudo[1702]: pam_unix(sudo:session): session closed for user root Dec 12 17:36:08.408993 sshd[1701]: Connection closed by 10.0.0.1 port 44134 Dec 12 17:36:08.409585 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:08.418184 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:44134.service: Deactivated successfully. Dec 12 17:36:08.419663 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:36:08.420395 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:36:08.423452 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:44142.service - OpenSSH per-connection server daemon (10.0.0.1:44142). Dec 12 17:36:08.424360 systemd-logind[1512]: Removed session 5. Dec 12 17:36:08.481023 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 44142 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.482442 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.487314 systemd-logind[1512]: New session 6 of user core. Dec 12 17:36:08.495469 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 17:36:08.547958 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:36:08.548206 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:36:08.626342 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 12 17:36:08.631939 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:36:08.632181 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:36:08.640128 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:36:08.672313 augenrules[1735]: No rules Dec 12 17:36:08.673405 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:36:08.674359 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:36:08.675277 sudo[1712]: pam_unix(sudo:session): session closed for user root Dec 12 17:36:08.678415 sshd[1711]: Connection closed by 10.0.0.1 port 44142 Dec 12 17:36:08.679395 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:08.690189 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:44142.service: Deactivated successfully. Dec 12 17:36:08.692502 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:36:08.693154 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:36:08.695219 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:44148.service - OpenSSH per-connection server daemon (10.0.0.1:44148). Dec 12 17:36:08.695857 systemd-logind[1512]: Removed session 6. Dec 12 17:36:08.752220 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 44148 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.753670 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.758249 systemd-logind[1512]: New session 7 of user core. 
Dec 12 17:36:08.774432 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:36:08.840336 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:36:08.840627 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:36:09.129205 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:36:09.152659 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:36:09.367043 dockerd[1769]: time="2025-12-12T17:36:09.366988714Z" level=info msg="Starting up" Dec 12 17:36:09.367938 dockerd[1769]: time="2025-12-12T17:36:09.367912532Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:36:09.377836 dockerd[1769]: time="2025-12-12T17:36:09.377783981Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:36:09.411511 dockerd[1769]: time="2025-12-12T17:36:09.411391796Z" level=info msg="Loading containers: start." Dec 12 17:36:09.421300 kernel: Initializing XFRM netlink socket Dec 12 17:36:09.636547 systemd-networkd[1444]: docker0: Link UP Dec 12 17:36:09.639864 dockerd[1769]: time="2025-12-12T17:36:09.639817158Z" level=info msg="Loading containers: done." 
Dec 12 17:36:09.656103 dockerd[1769]: time="2025-12-12T17:36:09.655761931Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:36:09.656103 dockerd[1769]: time="2025-12-12T17:36:09.655852068Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:36:09.656103 dockerd[1769]: time="2025-12-12T17:36:09.655942843Z" level=info msg="Initializing buildkit" Dec 12 17:36:09.677533 dockerd[1769]: time="2025-12-12T17:36:09.677441405Z" level=info msg="Completed buildkit initialization" Dec 12 17:36:09.686078 dockerd[1769]: time="2025-12-12T17:36:09.686022706Z" level=info msg="Daemon has completed initialization" Dec 12 17:36:09.686630 dockerd[1769]: time="2025-12-12T17:36:09.686109973Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:36:09.686274 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:36:10.194739 containerd[1539]: time="2025-12-12T17:36:10.194701582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:36:10.695986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409747305.mount: Deactivated successfully. 
Dec 12 17:36:11.763910 containerd[1539]: time="2025-12-12T17:36:11.763840958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:11.765070 containerd[1539]: time="2025-12-12T17:36:11.764848130Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Dec 12 17:36:11.765974 containerd[1539]: time="2025-12-12T17:36:11.765942439Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:11.768483 containerd[1539]: time="2025-12-12T17:36:11.768443613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:11.769849 containerd[1539]: time="2025-12-12T17:36:11.769551686Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.574809459s" Dec 12 17:36:11.769849 containerd[1539]: time="2025-12-12T17:36:11.769600600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:36:11.770230 containerd[1539]: time="2025-12-12T17:36:11.770184101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:36:12.905471 containerd[1539]: time="2025-12-12T17:36:12.905425305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:12.906824 containerd[1539]: time="2025-12-12T17:36:12.906790996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Dec 12 17:36:12.907874 containerd[1539]: time="2025-12-12T17:36:12.907813658Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:12.911131 containerd[1539]: time="2025-12-12T17:36:12.911087580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:12.912346 containerd[1539]: time="2025-12-12T17:36:12.912312787Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.142079648s" Dec 12 17:36:12.912423 containerd[1539]: time="2025-12-12T17:36:12.912351979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 17:36:12.913333 containerd[1539]: time="2025-12-12T17:36:12.913134381Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:36:13.813816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:36:13.815171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:13.949775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:36:13.954089 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:36:14.113446 containerd[1539]: time="2025-12-12T17:36:14.113320003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:14.115190 containerd[1539]: time="2025-12-12T17:36:14.115104374Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Dec 12 17:36:14.118523 containerd[1539]: time="2025-12-12T17:36:14.117304910Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:14.120139 containerd[1539]: time="2025-12-12T17:36:14.120093634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:14.121116 containerd[1539]: time="2025-12-12T17:36:14.121075146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.207908434s" Dec 12 17:36:14.121222 containerd[1539]: time="2025-12-12T17:36:14.121206760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:36:14.122101 containerd[1539]: time="2025-12-12T17:36:14.122069077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:36:14.142459 
kubelet[2058]: E1212 17:36:14.142408 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:36:14.145758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:36:14.146038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:36:14.146719 systemd[1]: kubelet.service: Consumed 158ms CPU time, 108.1M memory peak. Dec 12 17:36:15.113036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274813664.mount: Deactivated successfully. Dec 12 17:36:15.492772 containerd[1539]: time="2025-12-12T17:36:15.492114031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:15.493565 containerd[1539]: time="2025-12-12T17:36:15.493527025Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 12 17:36:15.494379 containerd[1539]: time="2025-12-12T17:36:15.494323107Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:15.496673 containerd[1539]: time="2025-12-12T17:36:15.496451705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:15.497038 containerd[1539]: time="2025-12-12T17:36:15.497014618Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.374904689s" Dec 12 17:36:15.497120 containerd[1539]: time="2025-12-12T17:36:15.497107638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:36:15.497800 containerd[1539]: time="2025-12-12T17:36:15.497558121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:36:16.127400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135636729.mount: Deactivated successfully. Dec 12 17:36:16.966923 containerd[1539]: time="2025-12-12T17:36:16.966863794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:16.967745 containerd[1539]: time="2025-12-12T17:36:16.967716033Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 12 17:36:16.969333 containerd[1539]: time="2025-12-12T17:36:16.969254568Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:16.975397 containerd[1539]: time="2025-12-12T17:36:16.974972882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:16.976705 containerd[1539]: time="2025-12-12T17:36:16.976640847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.479054847s" Dec 12 17:36:16.976754 containerd[1539]: time="2025-12-12T17:36:16.976739757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:36:16.977696 containerd[1539]: time="2025-12-12T17:36:16.977673967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:36:17.433354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22747743.mount: Deactivated successfully. Dec 12 17:36:17.445601 containerd[1539]: time="2025-12-12T17:36:17.445537721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:36:17.447141 containerd[1539]: time="2025-12-12T17:36:17.447095568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:36:17.448149 containerd[1539]: time="2025-12-12T17:36:17.448104926Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:36:17.450926 containerd[1539]: time="2025-12-12T17:36:17.450872339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:36:17.452250 containerd[1539]: time="2025-12-12T17:36:17.452202727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.498ms" Dec 12 17:36:17.452250 containerd[1539]: time="2025-12-12T17:36:17.452236489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:36:17.452756 containerd[1539]: time="2025-12-12T17:36:17.452710822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:36:17.987747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898101339.mount: Deactivated successfully. Dec 12 17:36:19.736382 containerd[1539]: time="2025-12-12T17:36:19.736320067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:19.887561 containerd[1539]: time="2025-12-12T17:36:19.887507665Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 12 17:36:19.889903 containerd[1539]: time="2025-12-12T17:36:19.889842207Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:19.892793 containerd[1539]: time="2025-12-12T17:36:19.892741972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:19.894260 containerd[1539]: time="2025-12-12T17:36:19.894202844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.441455023s" Dec 12 17:36:19.894445 containerd[1539]: time="2025-12-12T17:36:19.894346198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:36:24.229590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:36:24.231165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:24.386866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:24.401587 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:36:24.440655 kubelet[2216]: E1212 17:36:24.440547 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:36:24.443149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:36:24.443310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:36:24.443905 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.3M memory peak. Dec 12 17:36:25.947141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:25.947764 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.3M memory peak. Dec 12 17:36:25.950662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:25.971507 systemd[1]: Reload requested from client PID 2232 ('systemctl') (unit session-7.scope)... Dec 12 17:36:25.971520 systemd[1]: Reloading... Dec 12 17:36:26.049277 zram_generator::config[2272]: No configuration found. 
Dec 12 17:36:26.260189 systemd[1]: Reloading finished in 288 ms. Dec 12 17:36:26.317770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:26.321792 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:36:26.321996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:26.322040 systemd[1]: kubelet.service: Consumed 98ms CPU time, 95.2M memory peak. Dec 12 17:36:26.323456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:26.474168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:26.478051 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:36:26.521584 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:36:26.521584 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:36:26.521584 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:36:26.521584 kubelet[2322]: I1212 17:36:26.521559 2322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:36:27.745760 kubelet[2322]: I1212 17:36:27.745707 2322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:36:27.745760 kubelet[2322]: I1212 17:36:27.745744 2322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:36:27.746116 kubelet[2322]: I1212 17:36:27.746015 2322 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:36:27.770643 kubelet[2322]: I1212 17:36:27.770414 2322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:36:27.772842 kubelet[2322]: E1212 17:36:27.772807 2322 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:36:27.776968 kubelet[2322]: I1212 17:36:27.776946 2322 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:36:27.780260 kubelet[2322]: I1212 17:36:27.779701 2322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:36:27.780260 kubelet[2322]: I1212 17:36:27.779937 2322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:36:27.780260 kubelet[2322]: I1212 17:36:27.779964 2322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:36:27.780260 kubelet[2322]: I1212 17:36:27.780210 2322 topology_manager.go:138] "Creating topology manager with none policy" 
Dec 12 17:36:27.780459 kubelet[2322]: I1212 17:36:27.780219 2322 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:36:27.780459 kubelet[2322]: I1212 17:36:27.780443 2322 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:36:27.782860 kubelet[2322]: I1212 17:36:27.782828 2322 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:36:27.782913 kubelet[2322]: I1212 17:36:27.782864 2322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:36:27.782913 kubelet[2322]: I1212 17:36:27.782890 2322 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:36:27.782913 kubelet[2322]: I1212 17:36:27.782900 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:36:27.786064 kubelet[2322]: W1212 17:36:27.786001 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Dec 12 17:36:27.786064 kubelet[2322]: E1212 17:36:27.786069 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:36:27.786064 kubelet[2322]: W1212 17:36:27.786013 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Dec 12 17:36:27.786064 kubelet[2322]: E1212 17:36:27.786103 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:36:27.786064 kubelet[2322]: I1212 17:36:27.786106 2322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:36:27.787532 kubelet[2322]: I1212 17:36:27.787502 2322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:36:27.787660 kubelet[2322]: W1212 17:36:27.787645 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:36:27.790266 kubelet[2322]: I1212 17:36:27.788801 2322 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:36:27.790266 kubelet[2322]: I1212 17:36:27.788846 2322 server.go:1287] "Started kubelet" Dec 12 17:36:27.790266 kubelet[2322]: I1212 17:36:27.788896 2322 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:36:27.790266 kubelet[2322]: I1212 17:36:27.789736 2322 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:36:27.790266 kubelet[2322]: I1212 17:36:27.790055 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:36:27.791285 kubelet[2322]: I1212 17:36:27.791213 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:36:27.791580 kubelet[2322]: I1212 17:36:27.791561 2322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:36:27.791928 kubelet[2322]: I1212 17:36:27.791907 2322 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:36:27.792079 kubelet[2322]: I1212 17:36:27.792067 2322 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:36:27.792293 kubelet[2322]: I1212 17:36:27.792278 
2322 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:36:27.792804 kubelet[2322]: W1212 17:36:27.792759 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Dec 12 17:36:27.792923 kubelet[2322]: E1212 17:36:27.792905 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:36:27.793177 kubelet[2322]: I1212 17:36:27.793157 2322 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:36:27.793351 kubelet[2322]: I1212 17:36:27.793326 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:36:27.794102 kubelet[2322]: I1212 17:36:27.793543 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:36:27.794218 kubelet[2322]: E1212 17:36:27.794193 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:36:27.794325 kubelet[2322]: E1212 17:36:27.794297 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Dec 12 17:36:27.795368 kubelet[2322]: I1212 17:36:27.795348 2322 factory.go:221] Registration of the containerd container factory successfully Dec 12 
17:36:27.796036 kubelet[2322]: E1212 17:36:27.795769 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880885d3270c3cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:36:27.788821453 +0000 UTC m=+1.307242251,LastTimestamp:2025-12-12 17:36:27.788821453 +0000 UTC m=+1.307242251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:36:27.797675 kubelet[2322]: E1212 17:36:27.797652 2322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:36:27.806533 kubelet[2322]: I1212 17:36:27.806512 2322 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:36:27.806852 kubelet[2322]: I1212 17:36:27.806612 2322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:36:27.806852 kubelet[2322]: I1212 17:36:27.806632 2322 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:36:27.807427 kubelet[2322]: I1212 17:36:27.807056 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:36:27.808166 kubelet[2322]: I1212 17:36:27.808104 2322 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:36:27.808166 kubelet[2322]: I1212 17:36:27.808133 2322 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:36:27.808166 kubelet[2322]: I1212 17:36:27.808152 2322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:36:27.808166 kubelet[2322]: I1212 17:36:27.808158 2322 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:36:27.808313 kubelet[2322]: E1212 17:36:27.808193 2322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:36:27.895330 kubelet[2322]: E1212 17:36:27.895288 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:36:27.908554 kubelet[2322]: E1212 17:36:27.908519 2322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:36:27.910037 kubelet[2322]: I1212 17:36:27.910009 2322 policy_none.go:49] "None policy: Start" Dec 12 17:36:27.910037 kubelet[2322]: I1212 17:36:27.910028 2322 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:36:27.910150 kubelet[2322]: I1212 17:36:27.910050 2322 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:36:27.910657 kubelet[2322]: W1212 17:36:27.910553 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Dec 12 17:36:27.910657 kubelet[2322]: E1212 17:36:27.910630 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:36:27.920283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:36:27.951177 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:36:27.954508 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:36:27.966442 kubelet[2322]: I1212 17:36:27.966365 2322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:36:27.966656 kubelet[2322]: I1212 17:36:27.966586 2322 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:36:27.966656 kubelet[2322]: I1212 17:36:27.966603 2322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:36:27.966921 kubelet[2322]: I1212 17:36:27.966903 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:36:27.967756 kubelet[2322]: E1212 17:36:27.967732 2322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:36:27.967889 kubelet[2322]: E1212 17:36:27.967859 2322 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:36:27.995452 kubelet[2322]: E1212 17:36:27.995409 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Dec 12 17:36:28.068772 kubelet[2322]: I1212 17:36:28.068689 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:36:28.069178 kubelet[2322]: E1212 17:36:28.069142 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Dec 12 17:36:28.117460 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 12 17:36:28.135091 kubelet[2322]: E1212 17:36:28.135039 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:28.138229 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 12 17:36:28.140136 kubelet[2322]: E1212 17:36:28.140089 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:28.142309 systemd[1]: Created slice kubepods-burstable-podc734a64a2c2a3a9ca68f6121e988aebb.slice - libcontainer container kubepods-burstable-podc734a64a2c2a3a9ca68f6121e988aebb.slice. 
Dec 12 17:36:28.144189 kubelet[2322]: E1212 17:36:28.144146 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:28.195495 kubelet[2322]: I1212 17:36:28.195437 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:28.195495 kubelet[2322]: I1212 17:36:28.195480 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:28.195637 kubelet[2322]: I1212 17:36:28.195508 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:28.195637 kubelet[2322]: I1212 17:36:28.195528 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:28.195637 kubelet[2322]: I1212 17:36:28.195546 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:28.195637 kubelet[2322]: I1212 17:36:28.195561 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:28.195637 kubelet[2322]: I1212 17:36:28.195585 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:28.195739 kubelet[2322]: I1212 17:36:28.195602 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:28.195739 kubelet[2322]: I1212 17:36:28.195616 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:28.270769 kubelet[2322]: I1212 17:36:28.270739 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:36:28.271170 kubelet[2322]: E1212 
17:36:28.271130 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Dec 12 17:36:28.396768 kubelet[2322]: E1212 17:36:28.396656 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Dec 12 17:36:28.436599 containerd[1539]: time="2025-12-12T17:36:28.436480510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:28.441213 containerd[1539]: time="2025-12-12T17:36:28.441180705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:28.445262 containerd[1539]: time="2025-12-12T17:36:28.445178327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c734a64a2c2a3a9ca68f6121e988aebb,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:28.457066 containerd[1539]: time="2025-12-12T17:36:28.456412952Z" level=info msg="connecting to shim 62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5" address="unix:///run/containerd/s/46e0af34c7c599f18a9eb6877b3b36222c52e1800b9eb72807aa39b2c552636d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:28.466594 containerd[1539]: time="2025-12-12T17:36:28.466542909Z" level=info msg="connecting to shim 143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf" address="unix:///run/containerd/s/8b638a7add10e307442d760cea31c17770b20e4d433b848c0621339cd2f839a2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:28.483081 containerd[1539]: time="2025-12-12T17:36:28.483036861Z" level=info msg="connecting to 
shim 2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f" address="unix:///run/containerd/s/5ba30627ee92097f0b81f5aba0fbe02a688c00a9f354e569d275c3781ae5a423" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:28.500446 systemd[1]: Started cri-containerd-143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf.scope - libcontainer container 143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf. Dec 12 17:36:28.501592 systemd[1]: Started cri-containerd-62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5.scope - libcontainer container 62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5. Dec 12 17:36:28.504746 systemd[1]: Started cri-containerd-2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f.scope - libcontainer container 2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f. Dec 12 17:36:28.537118 containerd[1539]: time="2025-12-12T17:36:28.536966620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5\"" Dec 12 17:36:28.543263 containerd[1539]: time="2025-12-12T17:36:28.543212007Z" level=info msg="CreateContainer within sandbox \"62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:36:28.551379 containerd[1539]: time="2025-12-12T17:36:28.551328817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf\"" Dec 12 17:36:28.552818 containerd[1539]: time="2025-12-12T17:36:28.552785591Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c734a64a2c2a3a9ca68f6121e988aebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f\"" Dec 12 17:36:28.553483 containerd[1539]: time="2025-12-12T17:36:28.553071796Z" level=info msg="Container 4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:28.554336 containerd[1539]: time="2025-12-12T17:36:28.554293232Z" level=info msg="CreateContainer within sandbox \"143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:36:28.554684 containerd[1539]: time="2025-12-12T17:36:28.554656536Z" level=info msg="CreateContainer within sandbox \"2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:36:28.564665 containerd[1539]: time="2025-12-12T17:36:28.564611740Z" level=info msg="Container d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:28.566680 containerd[1539]: time="2025-12-12T17:36:28.566630405Z" level=info msg="CreateContainer within sandbox \"62c277e31375ff1136003936ba29675bd7ecd8c770936264e88250b61cf535a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2\"" Dec 12 17:36:28.567246 containerd[1539]: time="2025-12-12T17:36:28.567215730Z" level=info msg="StartContainer for \"4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2\"" Dec 12 17:36:28.567302 containerd[1539]: time="2025-12-12T17:36:28.567238644Z" level=info msg="Container 07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:28.568376 containerd[1539]: time="2025-12-12T17:36:28.568332195Z" 
level=info msg="connecting to shim 4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2" address="unix:///run/containerd/s/46e0af34c7c599f18a9eb6877b3b36222c52e1800b9eb72807aa39b2c552636d" protocol=ttrpc version=3 Dec 12 17:36:28.575165 containerd[1539]: time="2025-12-12T17:36:28.575088885Z" level=info msg="CreateContainer within sandbox \"143d9079295e76d65bab3029df5c6f8fc6bfeb555b338a357a3d21c6300cabdf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb\"" Dec 12 17:36:28.577231 containerd[1539]: time="2025-12-12T17:36:28.577010816Z" level=info msg="StartContainer for \"d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb\"" Dec 12 17:36:28.578229 containerd[1539]: time="2025-12-12T17:36:28.578159992Z" level=info msg="connecting to shim d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb" address="unix:///run/containerd/s/8b638a7add10e307442d760cea31c17770b20e4d433b848c0621339cd2f839a2" protocol=ttrpc version=3 Dec 12 17:36:28.580148 containerd[1539]: time="2025-12-12T17:36:28.580106917Z" level=info msg="CreateContainer within sandbox \"2ab28f250de9c2bcb86d1b85ea19edce8450225656252a9e6f1fcbd64b64653f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480\"" Dec 12 17:36:28.580606 containerd[1539]: time="2025-12-12T17:36:28.580532524Z" level=info msg="StartContainer for \"07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480\"" Dec 12 17:36:28.582803 containerd[1539]: time="2025-12-12T17:36:28.582769771Z" level=info msg="connecting to shim 07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480" address="unix:///run/containerd/s/5ba30627ee92097f0b81f5aba0fbe02a688c00a9f354e569d275c3781ae5a423" protocol=ttrpc version=3 Dec 12 17:36:28.587431 systemd[1]: Started 
cri-containerd-4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2.scope - libcontainer container 4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2. Dec 12 17:36:28.603437 systemd[1]: Started cri-containerd-d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb.scope - libcontainer container d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb. Dec 12 17:36:28.606793 systemd[1]: Started cri-containerd-07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480.scope - libcontainer container 07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480. Dec 12 17:36:28.660075 containerd[1539]: time="2025-12-12T17:36:28.659927779Z" level=info msg="StartContainer for \"4cc03fdc84cedda6bd4673dc0d4d1a385625cfbe724f30a54c0bb530341f98d2\" returns successfully" Dec 12 17:36:28.661481 containerd[1539]: time="2025-12-12T17:36:28.661443418Z" level=info msg="StartContainer for \"07a98f9600bb9cb5ac98dafa7c15af33e68d3bfa48eeb201777ce1c3f8f3f480\" returns successfully" Dec 12 17:36:28.662256 containerd[1539]: time="2025-12-12T17:36:28.662204976Z" level=info msg="StartContainer for \"d18be4b6d9f2ad1ff6bfcaeae0c352fd36deeab0091a69ae92242c0d6d5cd9fb\" returns successfully" Dec 12 17:36:28.673348 kubelet[2322]: I1212 17:36:28.673304 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:36:28.673664 kubelet[2322]: E1212 17:36:28.673630 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Dec 12 17:36:28.818775 kubelet[2322]: E1212 17:36:28.818746 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:28.821580 kubelet[2322]: E1212 17:36:28.821437 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:28.824339 kubelet[2322]: E1212 17:36:28.824315 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:29.477648 kubelet[2322]: I1212 17:36:29.477347 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:36:29.828794 kubelet[2322]: E1212 17:36:29.828764 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:29.829657 kubelet[2322]: E1212 17:36:29.829637 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:36:30.114876 kubelet[2322]: E1212 17:36:30.114770 2322 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:36:30.193091 kubelet[2322]: I1212 17:36:30.193050 2322 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:36:30.196180 kubelet[2322]: I1212 17:36:30.196153 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:30.212254 kubelet[2322]: E1212 17:36:30.210878 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:30.212254 kubelet[2322]: I1212 17:36:30.210916 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:30.215700 kubelet[2322]: E1212 17:36:30.215437 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:30.215700 kubelet[2322]: I1212 17:36:30.215464 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:30.217227 kubelet[2322]: E1212 17:36:30.217204 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:30.783910 kubelet[2322]: I1212 17:36:30.783849 2322 apiserver.go:52] "Watching apiserver" Dec 12 17:36:30.792964 kubelet[2322]: I1212 17:36:30.792836 2322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:36:30.828054 kubelet[2322]: I1212 17:36:30.828023 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:30.830372 kubelet[2322]: E1212 17:36:30.830324 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:31.053690 kubelet[2322]: I1212 17:36:31.053579 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:31.057162 kubelet[2322]: E1212 17:36:31.057126 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:32.438554 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... Dec 12 17:36:32.438571 systemd[1]: Reloading... Dec 12 17:36:32.512275 zram_generator::config[2637]: No configuration found. Dec 12 17:36:32.689790 systemd[1]: Reloading finished in 250 ms. 
Dec 12 17:36:32.715380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:32.729209 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:36:32.729523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:32.729583 systemd[1]: kubelet.service: Consumed 1.667s CPU time, 128.1M memory peak. Dec 12 17:36:32.732082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:36:32.885286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:36:32.898663 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:36:32.950414 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:36:32.950414 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:36:32.950414 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:36:32.950776 kubelet[2679]: I1212 17:36:32.950395 2679 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:36:32.957311 kubelet[2679]: I1212 17:36:32.956535 2679 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:36:32.957311 kubelet[2679]: I1212 17:36:32.956570 2679 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:36:32.957674 kubelet[2679]: I1212 17:36:32.957650 2679 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:36:32.959496 kubelet[2679]: I1212 17:36:32.959463 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:36:32.961801 kubelet[2679]: I1212 17:36:32.961764 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:36:32.965912 kubelet[2679]: I1212 17:36:32.965885 2679 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:36:32.969248 kubelet[2679]: I1212 17:36:32.969192 2679 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:36:32.969547 kubelet[2679]: I1212 17:36:32.969502 2679 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:36:32.969731 kubelet[2679]: I1212 17:36:32.969536 2679 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:36:32.969815 kubelet[2679]: I1212 17:36:32.969736 2679 topology_manager.go:138] "Creating topology manager with none policy" 
Dec 12 17:36:32.969815 kubelet[2679]: I1212 17:36:32.969745 2679 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:36:32.969815 kubelet[2679]: I1212 17:36:32.969790 2679 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:36:32.969949 kubelet[2679]: I1212 17:36:32.969935 2679 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:36:32.969974 kubelet[2679]: I1212 17:36:32.969958 2679 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:36:32.970709 kubelet[2679]: I1212 17:36:32.970652 2679 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:36:32.970709 kubelet[2679]: I1212 17:36:32.970699 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:36:32.972743 kubelet[2679]: I1212 17:36:32.972704 2679 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:36:32.973696 kubelet[2679]: I1212 17:36:32.973675 2679 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:36:32.974811 kubelet[2679]: I1212 17:36:32.974777 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:36:32.974811 kubelet[2679]: I1212 17:36:32.974818 2679 server.go:1287] "Started kubelet" Dec 12 17:36:32.975926 kubelet[2679]: I1212 17:36:32.975853 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:36:32.976317 kubelet[2679]: I1212 17:36:32.976294 2679 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:36:32.976472 kubelet[2679]: I1212 17:36:32.976445 2679 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:36:32.977030 kubelet[2679]: I1212 17:36:32.976997 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:36:32.977959 kubelet[2679]: I1212 17:36:32.977923 2679 
server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:36:32.982861 kubelet[2679]: E1212 17:36:32.982562 2679 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:36:32.983003 kubelet[2679]: I1212 17:36:32.982978 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:36:32.984504 kubelet[2679]: I1212 17:36:32.984481 2679 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:36:32.984782 kubelet[2679]: I1212 17:36:32.984743 2679 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:36:32.984918 kubelet[2679]: I1212 17:36:32.984904 2679 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:36:32.989033 kubelet[2679]: I1212 17:36:32.988994 2679 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:36:32.989168 kubelet[2679]: I1212 17:36:32.989135 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:36:32.991112 kubelet[2679]: I1212 17:36:32.991077 2679 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:36:32.996384 kubelet[2679]: E1212 17:36:32.996339 2679 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:36:33.000089 kubelet[2679]: I1212 17:36:32.999915 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:36:33.000992 kubelet[2679]: I1212 17:36:33.000971 2679 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:36:33.001089 kubelet[2679]: I1212 17:36:33.001078 2679 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:36:33.001164 kubelet[2679]: I1212 17:36:33.001152 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:36:33.001208 kubelet[2679]: I1212 17:36:33.001200 2679 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:36:33.001329 kubelet[2679]: E1212 17:36:33.001309 2679 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:36:33.040413 kubelet[2679]: I1212 17:36:33.040383 2679 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:36:33.040413 kubelet[2679]: I1212 17:36:33.040403 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:36:33.040413 kubelet[2679]: I1212 17:36:33.040424 2679 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:36:33.040623 kubelet[2679]: I1212 17:36:33.040587 2679 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:36:33.040623 kubelet[2679]: I1212 17:36:33.040598 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:36:33.040623 kubelet[2679]: I1212 17:36:33.040616 2679 policy_none.go:49] "None policy: Start" Dec 12 17:36:33.040623 kubelet[2679]: I1212 17:36:33.040625 2679 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:36:33.040718 kubelet[2679]: I1212 17:36:33.040634 2679 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:36:33.040739 kubelet[2679]: I1212 17:36:33.040724 2679 state_mem.go:75] "Updated machine memory state" Dec 12 17:36:33.045332 kubelet[2679]: I1212 17:36:33.044519 2679 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:36:33.045332 kubelet[2679]: I1212 
17:36:33.044686 2679 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:36:33.045332 kubelet[2679]: I1212 17:36:33.044697 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:36:33.045714 kubelet[2679]: I1212 17:36:33.045237 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:36:33.046500 kubelet[2679]: E1212 17:36:33.046351 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:36:33.102218 kubelet[2679]: I1212 17:36:33.102183 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.102613 kubelet[2679]: I1212 17:36:33.102354 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:33.102796 kubelet[2679]: I1212 17:36:33.102512 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:33.146708 kubelet[2679]: I1212 17:36:33.146677 2679 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:36:33.155038 kubelet[2679]: I1212 17:36:33.155009 2679 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:36:33.155264 kubelet[2679]: I1212 17:36:33.155251 2679 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:36:33.286839 kubelet[2679]: I1212 17:36:33.286791 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.286839 kubelet[2679]: I1212 17:36:33.286833 2679 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.286839 kubelet[2679]: I1212 17:36:33.286853 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.287046 kubelet[2679]: I1212 17:36:33.286874 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:33.287046 kubelet[2679]: I1212 17:36:33.286890 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:33.287046 kubelet[2679]: I1212 17:36:33.286925 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.287046 kubelet[2679]: I1212 17:36:33.286946 
2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:33.287046 kubelet[2679]: I1212 17:36:33.286979 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:33.287144 kubelet[2679]: I1212 17:36:33.287022 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c734a64a2c2a3a9ca68f6121e988aebb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c734a64a2c2a3a9ca68f6121e988aebb\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:36:33.418703 sudo[2714]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 12 17:36:33.419067 sudo[2714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 12 17:36:33.745989 sudo[2714]: pam_unix(sudo:session): session closed for user root Dec 12 17:36:33.971893 kubelet[2679]: I1212 17:36:33.971841 2679 apiserver.go:52] "Watching apiserver" Dec 12 17:36:33.985252 kubelet[2679]: I1212 17:36:33.985201 2679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:36:34.019074 kubelet[2679]: I1212 17:36:34.018843 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:34.019074 kubelet[2679]: I1212 17:36:34.018853 2679 kubelet.go:3194] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:34.145827 kubelet[2679]: E1212 17:36:34.145493 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:36:34.146677 kubelet[2679]: E1212 17:36:34.146629 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:36:34.241878 kubelet[2679]: I1212 17:36:34.241803 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2417567919999999 podStartE2EDuration="1.241756792s" podCreationTimestamp="2025-12-12 17:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:34.241754712 +0000 UTC m=+1.339329332" watchObservedRunningTime="2025-12-12 17:36:34.241756792 +0000 UTC m=+1.339331372" Dec 12 17:36:34.242021 kubelet[2679]: I1212 17:36:34.241948 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.241942944 podStartE2EDuration="1.241942944s" podCreationTimestamp="2025-12-12 17:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:34.146918584 +0000 UTC m=+1.244493244" watchObservedRunningTime="2025-12-12 17:36:34.241942944 +0000 UTC m=+1.339517564" Dec 12 17:36:34.504092 kubelet[2679]: I1212 17:36:34.504023 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5039695100000001 podStartE2EDuration="1.50396951s" podCreationTimestamp="2025-12-12 17:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:34.308302246 +0000 UTC m=+1.405876866" watchObservedRunningTime="2025-12-12 17:36:34.50396951 +0000 UTC m=+1.601544130" Dec 12 17:36:35.619340 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 12 17:36:35.620832 sshd[1747]: Connection closed by 10.0.0.1 port 44148 Dec 12 17:36:35.621425 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:35.625441 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:44148.service: Deactivated successfully. Dec 12 17:36:35.629888 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:36:35.631309 systemd[1]: session-7.scope: Consumed 8.327s CPU time, 259.2M memory peak. Dec 12 17:36:35.636404 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:36:35.637856 systemd-logind[1512]: Removed session 7. Dec 12 17:36:39.224084 kubelet[2679]: I1212 17:36:39.224048 2679 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:36:39.225122 kubelet[2679]: I1212 17:36:39.224498 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:36:39.225171 containerd[1539]: time="2025-12-12T17:36:39.224330398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:36:40.128607 systemd[1]: Created slice kubepods-besteffort-podde04a2a5_1190_4177_98a1_af2b0ab6be7a.slice - libcontainer container kubepods-besteffort-podde04a2a5_1190_4177_98a1_af2b0ab6be7a.slice. 
Dec 12 17:36:40.129899 kubelet[2679]: I1212 17:36:40.129870 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de04a2a5-1190-4177-98a1-af2b0ab6be7a-lib-modules\") pod \"kube-proxy-tpgtw\" (UID: \"de04a2a5-1190-4177-98a1-af2b0ab6be7a\") " pod="kube-system/kube-proxy-tpgtw" Dec 12 17:36:40.129899 kubelet[2679]: I1212 17:36:40.129901 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-run\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129917 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de04a2a5-1190-4177-98a1-af2b0ab6be7a-xtables-lock\") pod \"kube-proxy-tpgtw\" (UID: \"de04a2a5-1190-4177-98a1-af2b0ab6be7a\") " pod="kube-system/kube-proxy-tpgtw" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129932 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-cgroup\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129946 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-lib-modules\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129962 2679 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-kernel\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129979 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-bpf-maps\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130285 kubelet[2679]: I1212 17:36:40.129993 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-hostproc\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130434 kubelet[2679]: I1212 17:36:40.130008 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-net\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130434 kubelet[2679]: I1212 17:36:40.130023 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-hubble-tls\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130434 kubelet[2679]: I1212 17:36:40.130040 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-xtables-lock\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130434 kubelet[2679]: I1212 17:36:40.130056 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnmk\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-kube-api-access-tsnmk\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130434 kubelet[2679]: I1212 17:36:40.130071 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de04a2a5-1190-4177-98a1-af2b0ab6be7a-kube-proxy\") pod \"kube-proxy-tpgtw\" (UID: \"de04a2a5-1190-4177-98a1-af2b0ab6be7a\") " pod="kube-system/kube-proxy-tpgtw" Dec 12 17:36:40.130528 kubelet[2679]: I1212 17:36:40.130087 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldl2\" (UniqueName: \"kubernetes.io/projected/de04a2a5-1190-4177-98a1-af2b0ab6be7a-kube-api-access-cldl2\") pod \"kube-proxy-tpgtw\" (UID: \"de04a2a5-1190-4177-98a1-af2b0ab6be7a\") " pod="kube-system/kube-proxy-tpgtw" Dec 12 17:36:40.130528 kubelet[2679]: I1212 17:36:40.130101 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cni-path\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130528 kubelet[2679]: I1212 17:36:40.130122 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-etc-cni-netd\") pod \"cilium-9kgfg\" 
(UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130528 kubelet[2679]: I1212 17:36:40.130138 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16a452c7-d7af-4f39-96f2-acbbafd66d28-clustermesh-secrets\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.130528 kubelet[2679]: I1212 17:36:40.130157 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-config-path\") pod \"cilium-9kgfg\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " pod="kube-system/cilium-9kgfg" Dec 12 17:36:40.142186 systemd[1]: Created slice kubepods-burstable-pod16a452c7_d7af_4f39_96f2_acbbafd66d28.slice - libcontainer container kubepods-burstable-pod16a452c7_d7af_4f39_96f2_acbbafd66d28.slice. Dec 12 17:36:40.344620 systemd[1]: Created slice kubepods-besteffort-podd48cebf7_fd63_4ddd_8d60_10b990c30aca.slice - libcontainer container kubepods-besteffort-podd48cebf7_fd63_4ddd_8d60_10b990c30aca.slice. 
Dec 12 17:36:40.431730 kubelet[2679]: I1212 17:36:40.431581 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48cebf7-fd63-4ddd-8d60-10b990c30aca-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nmfhk\" (UID: \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\") " pod="kube-system/cilium-operator-6c4d7847fc-nmfhk" Dec 12 17:36:40.431730 kubelet[2679]: I1212 17:36:40.431638 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g5r\" (UniqueName: \"kubernetes.io/projected/d48cebf7-fd63-4ddd-8d60-10b990c30aca-kube-api-access-h2g5r\") pod \"cilium-operator-6c4d7847fc-nmfhk\" (UID: \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\") " pod="kube-system/cilium-operator-6c4d7847fc-nmfhk" Dec 12 17:36:40.440575 containerd[1539]: time="2025-12-12T17:36:40.440528425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpgtw,Uid:de04a2a5-1190-4177-98a1-af2b0ab6be7a,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:40.445343 containerd[1539]: time="2025-12-12T17:36:40.445300990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kgfg,Uid:16a452c7-d7af-4f39-96f2-acbbafd66d28,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:40.457559 containerd[1539]: time="2025-12-12T17:36:40.457519153Z" level=info msg="connecting to shim cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf" address="unix:///run/containerd/s/46e749326bab04493422cf195f608ba14c611447fefbe473792f86dcb15fae16" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:40.469549 containerd[1539]: time="2025-12-12T17:36:40.469511363Z" level=info msg="connecting to shim 0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:40.483436 systemd[1]: Started 
cri-containerd-cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf.scope - libcontainer container cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf. Dec 12 17:36:40.489638 systemd[1]: Started cri-containerd-0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190.scope - libcontainer container 0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190. Dec 12 17:36:40.512970 containerd[1539]: time="2025-12-12T17:36:40.512923153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpgtw,Uid:de04a2a5-1190-4177-98a1-af2b0ab6be7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf\"" Dec 12 17:36:40.516750 containerd[1539]: time="2025-12-12T17:36:40.516702110Z" level=info msg="CreateContainer within sandbox \"cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:36:40.520566 containerd[1539]: time="2025-12-12T17:36:40.520532146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kgfg,Uid:16a452c7-d7af-4f39-96f2-acbbafd66d28,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\"" Dec 12 17:36:40.522393 containerd[1539]: time="2025-12-12T17:36:40.522232850Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 12 17:36:40.527918 containerd[1539]: time="2025-12-12T17:36:40.527891027Z" level=info msg="Container 947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:40.536318 containerd[1539]: time="2025-12-12T17:36:40.536184317Z" level=info msg="CreateContainer within sandbox \"cbdd8fb13085c44be015b09ff0318cb4e02162aa1adeaf41d2132aff6304c8cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2\"" Dec 12 17:36:40.537298 containerd[1539]: time="2025-12-12T17:36:40.537270682Z" level=info msg="StartContainer for \"947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2\"" Dec 12 17:36:40.541701 containerd[1539]: time="2025-12-12T17:36:40.541670699Z" level=info msg="connecting to shim 947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2" address="unix:///run/containerd/s/46e749326bab04493422cf195f608ba14c611447fefbe473792f86dcb15fae16" protocol=ttrpc version=3 Dec 12 17:36:40.559447 systemd[1]: Started cri-containerd-947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2.scope - libcontainer container 947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2. Dec 12 17:36:40.642743 containerd[1539]: time="2025-12-12T17:36:40.642706216Z" level=info msg="StartContainer for \"947fbb3e2b4ee70b8c515c2eac56aee688872146fe48aba41a3012e8a3c405b2\" returns successfully" Dec 12 17:36:40.650264 containerd[1539]: time="2025-12-12T17:36:40.650218572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nmfhk,Uid:d48cebf7-fd63-4ddd-8d60-10b990c30aca,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:40.665792 containerd[1539]: time="2025-12-12T17:36:40.665743228Z" level=info msg="connecting to shim 4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67" address="unix:///run/containerd/s/3b314147efa3d0243bd0f888ff80cbfab58279143d043a3847e6b15bbef4e883" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:40.695453 systemd[1]: Started cri-containerd-4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67.scope - libcontainer container 4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67. 
Dec 12 17:36:40.732797 containerd[1539]: time="2025-12-12T17:36:40.732743051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nmfhk,Uid:d48cebf7-fd63-4ddd-8d60-10b990c30aca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\"" Dec 12 17:36:41.045921 kubelet[2679]: I1212 17:36:41.045539 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tpgtw" podStartSLOduration=1.045519607 podStartE2EDuration="1.045519607s" podCreationTimestamp="2025-12-12 17:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:41.044999823 +0000 UTC m=+8.142574443" watchObservedRunningTime="2025-12-12 17:36:41.045519607 +0000 UTC m=+8.143094227" Dec 12 17:36:45.552026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752065462.mount: Deactivated successfully. Dec 12 17:36:46.438560 update_engine[1514]: I20251212 17:36:46.437999 1514 update_attempter.cc:509] Updating boot flags... 
Dec 12 17:36:46.720592 containerd[1539]: time="2025-12-12T17:36:46.720473855Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:46.721078 containerd[1539]: time="2025-12-12T17:36:46.721041801Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Dec 12 17:36:46.721948 containerd[1539]: time="2025-12-12T17:36:46.721907701Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:46.723793 containerd[1539]: time="2025-12-12T17:36:46.723768017Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.201455569s" Dec 12 17:36:46.723874 containerd[1539]: time="2025-12-12T17:36:46.723796336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 12 17:36:46.734823 containerd[1539]: time="2025-12-12T17:36:46.734787996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 12 17:36:46.741611 containerd[1539]: time="2025-12-12T17:36:46.741491557Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:36:46.757269 containerd[1539]: time="2025-12-12T17:36:46.757204105Z" level=info msg="Container 1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:46.764169 containerd[1539]: time="2025-12-12T17:36:46.764034783Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\"" Dec 12 17:36:46.766499 containerd[1539]: time="2025-12-12T17:36:46.766454206Z" level=info msg="StartContainer for \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\"" Dec 12 17:36:46.768128 containerd[1539]: time="2025-12-12T17:36:46.768095127Z" level=info msg="connecting to shim 1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" protocol=ttrpc version=3 Dec 12 17:36:46.825478 systemd[1]: Started cri-containerd-1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200.scope - libcontainer container 1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200. Dec 12 17:36:46.865023 systemd[1]: cri-containerd-1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200.scope: Deactivated successfully. 
Dec 12 17:36:46.939329 containerd[1539]: time="2025-12-12T17:36:46.939291713Z" level=info msg="StartContainer for \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" returns successfully" Dec 12 17:36:46.954261 containerd[1539]: time="2025-12-12T17:36:46.954179241Z" level=info msg="received container exit event container_id:\"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" id:\"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" pid:3114 exited_at:{seconds:1765561006 nanos:945017658}" Dec 12 17:36:47.053005 containerd[1539]: time="2025-12-12T17:36:47.052959043Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 12 17:36:47.064422 containerd[1539]: time="2025-12-12T17:36:47.064365946Z" level=info msg="Container 93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:47.070226 containerd[1539]: time="2025-12-12T17:36:47.070186655Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\"" Dec 12 17:36:47.070677 containerd[1539]: time="2025-12-12T17:36:47.070649165Z" level=info msg="StartContainer for \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\"" Dec 12 17:36:47.073957 containerd[1539]: time="2025-12-12T17:36:47.073858492Z" level=info msg="connecting to shim 93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" protocol=ttrpc version=3 Dec 12 17:36:47.095495 systemd[1]: Started cri-containerd-93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82.scope - 
libcontainer container 93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82. Dec 12 17:36:47.122114 containerd[1539]: time="2025-12-12T17:36:47.122046927Z" level=info msg="StartContainer for \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" returns successfully" Dec 12 17:36:47.134501 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:36:47.135328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:36:47.135636 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:36:47.137007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:36:47.137614 systemd[1]: cri-containerd-93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82.scope: Deactivated successfully. Dec 12 17:36:47.141088 containerd[1539]: time="2025-12-12T17:36:47.141047500Z" level=info msg="received container exit event container_id:\"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" id:\"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" pid:3158 exited_at:{seconds:1765561007 nanos:140627869}" Dec 12 17:36:47.161271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:36:47.756928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200-rootfs.mount: Deactivated successfully. 
Dec 12 17:36:48.058459 containerd[1539]: time="2025-12-12T17:36:48.058342508Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 12 17:36:48.081290 containerd[1539]: time="2025-12-12T17:36:48.081218938Z" level=info msg="Container 7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:48.106915 containerd[1539]: time="2025-12-12T17:36:48.106827590Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\"" Dec 12 17:36:48.107567 containerd[1539]: time="2025-12-12T17:36:48.107461896Z" level=info msg="StartContainer for \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\"" Dec 12 17:36:48.109287 containerd[1539]: time="2025-12-12T17:36:48.109256017Z" level=info msg="connecting to shim 7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" protocol=ttrpc version=3 Dec 12 17:36:48.140530 systemd[1]: Started cri-containerd-7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb.scope - libcontainer container 7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb. Dec 12 17:36:48.205906 systemd[1]: cri-containerd-7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb.scope: Deactivated successfully. 
Dec 12 17:36:48.212920 containerd[1539]: time="2025-12-12T17:36:48.212878637Z" level=info msg="received container exit event container_id:\"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" id:\"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" pid:3219 exited_at:{seconds:1765561008 nanos:210379731}" Dec 12 17:36:48.217309 containerd[1539]: time="2025-12-12T17:36:48.217163266Z" level=info msg="StartContainer for \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" returns successfully" Dec 12 17:36:48.318273 containerd[1539]: time="2025-12-12T17:36:48.318146942Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:48.319408 containerd[1539]: time="2025-12-12T17:36:48.319380596Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Dec 12 17:36:48.320267 containerd[1539]: time="2025-12-12T17:36:48.320216538Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:48.321665 containerd[1539]: time="2025-12-12T17:36:48.321624908Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.586679276s" Dec 12 17:36:48.321711 containerd[1539]: time="2025-12-12T17:36:48.321671787Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 12 17:36:48.325735 containerd[1539]: time="2025-12-12T17:36:48.325685661Z" level=info msg="CreateContainer within sandbox \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 12 17:36:48.332324 containerd[1539]: time="2025-12-12T17:36:48.332284119Z" level=info msg="Container ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:48.337561 containerd[1539]: time="2025-12-12T17:36:48.337443969Z" level=info msg="CreateContainer within sandbox \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\"" Dec 12 17:36:48.338005 containerd[1539]: time="2025-12-12T17:36:48.337973277Z" level=info msg="StartContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\"" Dec 12 17:36:48.339021 containerd[1539]: time="2025-12-12T17:36:48.338982056Z" level=info msg="connecting to shim ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c" address="unix:///run/containerd/s/3b314147efa3d0243bd0f888ff80cbfab58279143d043a3847e6b15bbef4e883" protocol=ttrpc version=3 Dec 12 17:36:48.359454 systemd[1]: Started cri-containerd-ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c.scope - libcontainer container ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c. 
Dec 12 17:36:48.391232 containerd[1539]: time="2025-12-12T17:36:48.391145058Z" level=info msg="StartContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" returns successfully" Dec 12 17:36:49.065283 containerd[1539]: time="2025-12-12T17:36:49.065224442Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 12 17:36:49.073144 kubelet[2679]: I1212 17:36:49.073071 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nmfhk" podStartSLOduration=1.48472864 podStartE2EDuration="9.073053322s" podCreationTimestamp="2025-12-12 17:36:40 +0000 UTC" firstStartedPulling="2025-12-12 17:36:40.733957332 +0000 UTC m=+7.831531912" lastFinishedPulling="2025-12-12 17:36:48.322281974 +0000 UTC m=+15.419856594" observedRunningTime="2025-12-12 17:36:49.072014544 +0000 UTC m=+16.169589164" watchObservedRunningTime="2025-12-12 17:36:49.073053322 +0000 UTC m=+16.170627942" Dec 12 17:36:49.093276 containerd[1539]: time="2025-12-12T17:36:49.091428348Z" level=info msg="Container 94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:49.104988 containerd[1539]: time="2025-12-12T17:36:49.104856634Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\"" Dec 12 17:36:49.107638 containerd[1539]: time="2025-12-12T17:36:49.106448361Z" level=info msg="StartContainer for \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\"" Dec 12 17:36:49.107862 containerd[1539]: time="2025-12-12T17:36:49.107834373Z" level=info msg="connecting to shim 
94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" protocol=ttrpc version=3 Dec 12 17:36:49.141500 systemd[1]: Started cri-containerd-94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064.scope - libcontainer container 94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064. Dec 12 17:36:49.165369 systemd[1]: cri-containerd-94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064.scope: Deactivated successfully. Dec 12 17:36:49.167433 containerd[1539]: time="2025-12-12T17:36:49.167389838Z" level=info msg="received container exit event container_id:\"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" id:\"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" pid:3300 exited_at:{seconds:1765561009 nanos:166420618}" Dec 12 17:36:49.168974 containerd[1539]: time="2025-12-12T17:36:49.168805889Z" level=info msg="StartContainer for \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" returns successfully" Dec 12 17:36:49.756773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064-rootfs.mount: Deactivated successfully. 
Dec 12 17:36:50.071201 containerd[1539]: time="2025-12-12T17:36:50.071004630Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 17:36:50.117960 containerd[1539]: time="2025-12-12T17:36:50.116003995Z" level=info msg="Container 3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:50.124600 containerd[1539]: time="2025-12-12T17:36:50.124547749Z" level=info msg="CreateContainer within sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\"" Dec 12 17:36:50.125469 containerd[1539]: time="2025-12-12T17:36:50.125358054Z" level=info msg="StartContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\"" Dec 12 17:36:50.126726 containerd[1539]: time="2025-12-12T17:36:50.126552070Z" level=info msg="connecting to shim 3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a" address="unix:///run/containerd/s/0836ebe2f3baabdee55899d5bbc6dc97abc41c96b852441b3248cf52333df6c3" protocol=ttrpc version=3 Dec 12 17:36:50.145422 systemd[1]: Started cri-containerd-3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a.scope - libcontainer container 3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a. 
Dec 12 17:36:50.234477 containerd[1539]: time="2025-12-12T17:36:50.234438493Z" level=info msg="StartContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" returns successfully" Dec 12 17:36:50.420498 kubelet[2679]: I1212 17:36:50.420199 2679 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:36:50.466739 systemd[1]: Created slice kubepods-burstable-pod71a6f82b_03be_4508_b1b4_44d97b9666a5.slice - libcontainer container kubepods-burstable-pod71a6f82b_03be_4508_b1b4_44d97b9666a5.slice. Dec 12 17:36:50.471842 systemd[1]: Created slice kubepods-burstable-podf65ee825_d68a_435c_bccd_6c3bf2c9b7ae.slice - libcontainer container kubepods-burstable-podf65ee825_d68a_435c_bccd_6c3bf2c9b7ae.slice. Dec 12 17:36:50.504087 kubelet[2679]: I1212 17:36:50.504035 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcq5\" (UniqueName: \"kubernetes.io/projected/71a6f82b-03be-4508-b1b4-44d97b9666a5-kube-api-access-txcq5\") pod \"coredns-668d6bf9bc-5p6zw\" (UID: \"71a6f82b-03be-4508-b1b4-44d97b9666a5\") " pod="kube-system/coredns-668d6bf9bc-5p6zw" Dec 12 17:36:50.504224 kubelet[2679]: I1212 17:36:50.504102 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq2rw\" (UniqueName: \"kubernetes.io/projected/f65ee825-d68a-435c-bccd-6c3bf2c9b7ae-kube-api-access-fq2rw\") pod \"coredns-668d6bf9bc-z2jpf\" (UID: \"f65ee825-d68a-435c-bccd-6c3bf2c9b7ae\") " pod="kube-system/coredns-668d6bf9bc-z2jpf" Dec 12 17:36:50.504224 kubelet[2679]: I1212 17:36:50.504153 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71a6f82b-03be-4508-b1b4-44d97b9666a5-config-volume\") pod \"coredns-668d6bf9bc-5p6zw\" (UID: \"71a6f82b-03be-4508-b1b4-44d97b9666a5\") " pod="kube-system/coredns-668d6bf9bc-5p6zw" Dec 12 
17:36:50.504224 kubelet[2679]: I1212 17:36:50.504179 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f65ee825-d68a-435c-bccd-6c3bf2c9b7ae-config-volume\") pod \"coredns-668d6bf9bc-z2jpf\" (UID: \"f65ee825-d68a-435c-bccd-6c3bf2c9b7ae\") " pod="kube-system/coredns-668d6bf9bc-z2jpf" Dec 12 17:36:50.773370 containerd[1539]: time="2025-12-12T17:36:50.773295097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p6zw,Uid:71a6f82b-03be-4508-b1b4-44d97b9666a5,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:50.777036 containerd[1539]: time="2025-12-12T17:36:50.777003745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z2jpf,Uid:f65ee825-d68a-435c-bccd-6c3bf2c9b7ae,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:51.092680 kubelet[2679]: I1212 17:36:51.092234 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9kgfg" podStartSLOduration=4.879437522 podStartE2EDuration="11.092215938s" podCreationTimestamp="2025-12-12 17:36:40 +0000 UTC" firstStartedPulling="2025-12-12 17:36:40.521828064 +0000 UTC m=+7.619402684" lastFinishedPulling="2025-12-12 17:36:46.73460648 +0000 UTC m=+13.832181100" observedRunningTime="2025-12-12 17:36:51.091895064 +0000 UTC m=+18.189469684" watchObservedRunningTime="2025-12-12 17:36:51.092215938 +0000 UTC m=+18.189790558" Dec 12 17:36:52.307722 systemd-networkd[1444]: cilium_host: Link UP Dec 12 17:36:52.307839 systemd-networkd[1444]: cilium_net: Link UP Dec 12 17:36:52.307960 systemd-networkd[1444]: cilium_net: Gained carrier Dec 12 17:36:52.308074 systemd-networkd[1444]: cilium_host: Gained carrier Dec 12 17:36:52.396354 systemd-networkd[1444]: cilium_vxlan: Link UP Dec 12 17:36:52.396362 systemd-networkd[1444]: cilium_vxlan: Gained carrier Dec 12 17:36:52.464398 systemd-networkd[1444]: cilium_host: Gained IPv6LL Dec 12 17:36:52.472362 
systemd-networkd[1444]: cilium_net: Gained IPv6LL Dec 12 17:36:52.700311 kernel: NET: Registered PF_ALG protocol family Dec 12 17:36:53.356885 systemd-networkd[1444]: lxc_health: Link UP Dec 12 17:36:53.357977 systemd-networkd[1444]: lxc_health: Gained carrier Dec 12 17:36:53.832880 systemd-networkd[1444]: lxc65e7a6a0a003: Link UP Dec 12 17:36:53.845882 kernel: eth0: renamed from tmpd1b7c Dec 12 17:36:53.845983 kernel: eth0: renamed from tmpd6480 Dec 12 17:36:53.848905 systemd-networkd[1444]: lxc932f0ea4b3a3: Link UP Dec 12 17:36:53.849102 systemd-networkd[1444]: cilium_vxlan: Gained IPv6LL Dec 12 17:36:53.850346 systemd-networkd[1444]: lxc65e7a6a0a003: Gained carrier Dec 12 17:36:53.852683 systemd-networkd[1444]: lxc932f0ea4b3a3: Gained carrier Dec 12 17:36:55.311838 systemd-networkd[1444]: lxc_health: Gained IPv6LL Dec 12 17:36:55.567411 systemd-networkd[1444]: lxc65e7a6a0a003: Gained IPv6LL Dec 12 17:36:55.695739 systemd-networkd[1444]: lxc932f0ea4b3a3: Gained IPv6LL Dec 12 17:36:57.460580 containerd[1539]: time="2025-12-12T17:36:57.460536927Z" level=info msg="connecting to shim d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a" address="unix:///run/containerd/s/06ac74fa6003fa042f308a27bd5afca2617f0dc6b1df50b2220ed399cd247322" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:57.462453 containerd[1539]: time="2025-12-12T17:36:57.462413420Z" level=info msg="connecting to shim d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d" address="unix:///run/containerd/s/60bbfbd0bdf0b3b593e23d7f3b12048d70d93587a083537654193d152a6700c8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:57.494402 systemd[1]: Started cri-containerd-d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a.scope - libcontainer container d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a. 
Dec 12 17:36:57.496949 systemd[1]: Started cri-containerd-d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d.scope - libcontainer container d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d. Dec 12 17:36:57.506717 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:57.507995 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:57.534216 containerd[1539]: time="2025-12-12T17:36:57.534175601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z2jpf,Uid:f65ee825-d68a-435c-bccd-6c3bf2c9b7ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a\"" Dec 12 17:36:57.534980 containerd[1539]: time="2025-12-12T17:36:57.534954429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p6zw,Uid:71a6f82b-03be-4508-b1b4-44d97b9666a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d\"" Dec 12 17:36:57.540137 containerd[1539]: time="2025-12-12T17:36:57.540109556Z" level=info msg="CreateContainer within sandbox \"d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:36:57.541540 containerd[1539]: time="2025-12-12T17:36:57.541506536Z" level=info msg="CreateContainer within sandbox \"d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:36:57.548888 containerd[1539]: time="2025-12-12T17:36:57.548853232Z" level=info msg="Container aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:57.554758 containerd[1539]: time="2025-12-12T17:36:57.554726149Z" level=info msg="CreateContainer 
within sandbox \"d64808e5a9b49f64dd305795e227f702669127505f130aba2341bce9dbc0973d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61\"" Dec 12 17:36:57.556305 containerd[1539]: time="2025-12-12T17:36:57.556280006Z" level=info msg="StartContainer for \"aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61\"" Dec 12 17:36:57.557050 containerd[1539]: time="2025-12-12T17:36:57.557027716Z" level=info msg="connecting to shim aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61" address="unix:///run/containerd/s/60bbfbd0bdf0b3b593e23d7f3b12048d70d93587a083537654193d152a6700c8" protocol=ttrpc version=3 Dec 12 17:36:57.557967 containerd[1539]: time="2025-12-12T17:36:57.557940023Z" level=info msg="Container 735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:57.566196 containerd[1539]: time="2025-12-12T17:36:57.566162226Z" level=info msg="CreateContainer within sandbox \"d1b7c7e1679d866ccf8701e01d4aea6b83406f12e3be3701660a34d42660106a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7\"" Dec 12 17:36:57.567058 containerd[1539]: time="2025-12-12T17:36:57.567015214Z" level=info msg="StartContainer for \"735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7\"" Dec 12 17:36:57.567938 containerd[1539]: time="2025-12-12T17:36:57.567914481Z" level=info msg="connecting to shim 735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7" address="unix:///run/containerd/s/06ac74fa6003fa042f308a27bd5afca2617f0dc6b1df50b2220ed399cd247322" protocol=ttrpc version=3 Dec 12 17:36:57.578393 systemd[1]: Started cri-containerd-aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61.scope - libcontainer container aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61. 
Dec 12 17:36:57.581470 systemd[1]: Started cri-containerd-735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7.scope - libcontainer container 735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7. Dec 12 17:36:57.612350 containerd[1539]: time="2025-12-12T17:36:57.611812017Z" level=info msg="StartContainer for \"aab0699ed9883595d5b5d6527263c182d391d894e1ca689e7ef850e674d25d61\" returns successfully" Dec 12 17:36:57.620085 containerd[1539]: time="2025-12-12T17:36:57.620039060Z" level=info msg="StartContainer for \"735028ad16466ea526a169f4e5d4c7353f5ae1122aa068fe531a5781a1f52ff7\" returns successfully" Dec 12 17:36:58.108612 kubelet[2679]: I1212 17:36:58.108457 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z2jpf" podStartSLOduration=18.108440661 podStartE2EDuration="18.108440661s" podCreationTimestamp="2025-12-12 17:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:58.107471754 +0000 UTC m=+25.205046454" watchObservedRunningTime="2025-12-12 17:36:58.108440661 +0000 UTC m=+25.206015281" Dec 12 17:36:58.120814 kubelet[2679]: I1212 17:36:58.120512 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5p6zw" podStartSLOduration=18.120492817 podStartE2EDuration="18.120492817s" podCreationTimestamp="2025-12-12 17:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:58.119218874 +0000 UTC m=+25.216793494" watchObservedRunningTime="2025-12-12 17:36:58.120492817 +0000 UTC m=+25.218067437" Dec 12 17:37:01.496427 kubelet[2679]: I1212 17:37:01.496378 2679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:37:03.397785 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:45224.service - OpenSSH 
per-connection server daemon (10.0.0.1:45224). Dec 12 17:37:03.463218 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:03.464539 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:03.470748 systemd-logind[1512]: New session 8 of user core. Dec 12 17:37:03.476450 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:37:03.612166 sshd[4031]: Connection closed by 10.0.0.1 port 45224 Dec 12 17:37:03.611691 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:03.615648 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:45224.service: Deactivated successfully. Dec 12 17:37:03.618454 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:37:03.619092 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:37:03.620713 systemd-logind[1512]: Removed session 8. Dec 12 17:37:08.632606 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232). Dec 12 17:37:08.719363 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:08.720904 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:08.729289 systemd-logind[1512]: New session 9 of user core. Dec 12 17:37:08.735616 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:37:08.858919 sshd[4049]: Connection closed by 10.0.0.1 port 45232 Dec 12 17:37:08.859462 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:08.866019 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:45232.service: Deactivated successfully. Dec 12 17:37:08.866410 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. 
Dec 12 17:37:08.868187 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:37:08.872387 systemd-logind[1512]: Removed session 9. Dec 12 17:37:13.879070 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). Dec 12 17:37:13.953499 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:13.956498 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:13.963501 systemd-logind[1512]: New session 10 of user core. Dec 12 17:37:13.972731 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:37:14.143989 sshd[4069]: Connection closed by 10.0.0.1 port 52098 Dec 12 17:37:14.144645 sshd-session[4066]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:14.150184 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:52098.service: Deactivated successfully. Dec 12 17:37:14.152729 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:37:14.157633 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:37:14.159299 systemd-logind[1512]: Removed session 10. Dec 12 17:37:19.159408 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:52108.service - OpenSSH per-connection server daemon (10.0.0.1:52108). Dec 12 17:37:19.227499 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 52108 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:19.228908 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:19.232806 systemd-logind[1512]: New session 11 of user core. Dec 12 17:37:19.245408 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:37:19.359402 sshd[4087]: Connection closed by 10.0.0.1 port 52108 Dec 12 17:37:19.359739 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:19.373800 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:52108.service: Deactivated successfully. Dec 12 17:37:19.375919 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:37:19.377068 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:37:19.379515 systemd-logind[1512]: Removed session 11. Dec 12 17:37:19.381151 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:52122.service - OpenSSH per-connection server daemon (10.0.0.1:52122). Dec 12 17:37:19.439226 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 52122 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:19.440317 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:19.445518 systemd-logind[1512]: New session 12 of user core. Dec 12 17:37:19.452424 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:37:19.612551 sshd[4105]: Connection closed by 10.0.0.1 port 52122 Dec 12 17:37:19.612760 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:19.624087 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:52122.service: Deactivated successfully. Dec 12 17:37:19.626722 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:37:19.627876 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:37:19.633652 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:52126.service - OpenSSH per-connection server daemon (10.0.0.1:52126). Dec 12 17:37:19.637307 systemd-logind[1512]: Removed session 12. 
Dec 12 17:37:19.689479 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 52126 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:19.690794 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:19.694810 systemd-logind[1512]: New session 13 of user core. Dec 12 17:37:19.704410 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:37:19.822297 sshd[4121]: Connection closed by 10.0.0.1 port 52126 Dec 12 17:37:19.822807 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:19.826165 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:52126.service: Deactivated successfully. Dec 12 17:37:19.829675 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:37:19.830341 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:37:19.833637 systemd-logind[1512]: Removed session 13. Dec 12 17:37:24.839732 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:43670.service - OpenSSH per-connection server daemon (10.0.0.1:43670). Dec 12 17:37:24.899465 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 43670 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:24.901298 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:24.908452 systemd-logind[1512]: New session 14 of user core. Dec 12 17:37:24.918460 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:37:25.055384 sshd[4138]: Connection closed by 10.0.0.1 port 43670 Dec 12 17:37:25.055880 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:25.068333 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:43670.service: Deactivated successfully. Dec 12 17:37:25.069947 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:37:25.071151 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit. 
Dec 12 17:37:25.072901 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:43678.service - OpenSSH per-connection server daemon (10.0.0.1:43678). Dec 12 17:37:25.074863 systemd-logind[1512]: Removed session 14. Dec 12 17:37:25.139863 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:25.141209 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:25.145673 systemd-logind[1512]: New session 15 of user core. Dec 12 17:37:25.156436 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:37:25.338229 sshd[4155]: Connection closed by 10.0.0.1 port 43678 Dec 12 17:37:25.338670 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:25.351930 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:43678.service: Deactivated successfully. Dec 12 17:37:25.354036 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:37:25.354924 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:37:25.357661 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:43686.service - OpenSSH per-connection server daemon (10.0.0.1:43686). Dec 12 17:37:25.358181 systemd-logind[1512]: Removed session 15. Dec 12 17:37:25.420976 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 43686 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:25.422833 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:25.427196 systemd-logind[1512]: New session 16 of user core. Dec 12 17:37:25.438448 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:37:25.940127 sshd[4169]: Connection closed by 10.0.0.1 port 43686 Dec 12 17:37:25.940357 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:25.951358 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:43686.service: Deactivated successfully. Dec 12 17:37:25.954093 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:37:25.956882 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:37:25.961139 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:43688.service - OpenSSH per-connection server daemon (10.0.0.1:43688). Dec 12 17:37:25.961770 systemd-logind[1512]: Removed session 16. Dec 12 17:37:26.014522 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 43688 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:26.015811 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:26.020339 systemd-logind[1512]: New session 17 of user core. Dec 12 17:37:26.034442 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:37:26.255782 sshd[4191]: Connection closed by 10.0.0.1 port 43688 Dec 12 17:37:26.256164 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:26.269979 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:43688.service: Deactivated successfully. Dec 12 17:37:26.272043 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:37:26.273025 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:37:26.275206 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:43698.service - OpenSSH per-connection server daemon (10.0.0.1:43698). Dec 12 17:37:26.276223 systemd-logind[1512]: Removed session 17. 
Dec 12 17:37:26.345236 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 43698 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:26.346649 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:26.351323 systemd-logind[1512]: New session 18 of user core. Dec 12 17:37:26.360427 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:37:26.468861 sshd[4205]: Connection closed by 10.0.0.1 port 43698 Dec 12 17:37:26.469225 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:26.472624 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:43698.service: Deactivated successfully. Dec 12 17:37:26.474130 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:37:26.476428 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:37:26.477177 systemd-logind[1512]: Removed session 18. Dec 12 17:37:31.487476 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:48956.service - OpenSSH per-connection server daemon (10.0.0.1:48956). Dec 12 17:37:31.566749 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 48956 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:31.568663 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:31.572932 systemd-logind[1512]: New session 19 of user core. Dec 12 17:37:31.582413 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:37:31.704838 sshd[4224]: Connection closed by 10.0.0.1 port 48956 Dec 12 17:37:31.704687 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:31.712454 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:48956.service: Deactivated successfully. Dec 12 17:37:31.715636 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:37:31.716409 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit. 
Dec 12 17:37:31.717578 systemd-logind[1512]: Removed session 19. Dec 12 17:37:36.722065 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:49074.service - OpenSSH per-connection server daemon (10.0.0.1:49074). Dec 12 17:37:36.780826 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 49074 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:36.782133 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:36.788330 systemd-logind[1512]: New session 20 of user core. Dec 12 17:37:36.795433 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:37:36.922386 sshd[4242]: Connection closed by 10.0.0.1 port 49074 Dec 12 17:37:36.922759 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:36.928041 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:49074.service: Deactivated successfully. Dec 12 17:37:36.930025 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:37:36.931149 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:37:36.932710 systemd-logind[1512]: Removed session 20. Dec 12 17:37:41.936495 systemd[1]: Started sshd@20-10.0.0.78:22-10.0.0.1:54720.service - OpenSSH per-connection server daemon (10.0.0.1:54720). Dec 12 17:37:42.009832 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 54720 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:42.011103 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:42.015496 systemd-logind[1512]: New session 21 of user core. Dec 12 17:37:42.031440 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:37:42.142311 sshd[4260]: Connection closed by 10.0.0.1 port 54720 Dec 12 17:37:42.143083 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:42.153023 systemd[1]: sshd@20-10.0.0.78:22-10.0.0.1:54720.service: Deactivated successfully. Dec 12 17:37:42.156167 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:37:42.156946 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:37:42.160733 systemd[1]: Started sshd@21-10.0.0.78:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Dec 12 17:37:42.161524 systemd-logind[1512]: Removed session 21. Dec 12 17:37:42.226653 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:42.229016 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:42.233086 systemd-logind[1512]: New session 22 of user core. Dec 12 17:37:42.239944 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 17:37:44.414840 containerd[1539]: time="2025-12-12T17:37:44.414677951Z" level=info msg="StopContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" with timeout 30 (s)" Dec 12 17:37:44.422888 containerd[1539]: time="2025-12-12T17:37:44.422830156Z" level=info msg="Stop container \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" with signal terminated" Dec 12 17:37:44.428657 containerd[1539]: time="2025-12-12T17:37:44.428618616Z" level=info msg="StopContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" with timeout 2 (s)" Dec 12 17:37:44.428826 containerd[1539]: time="2025-12-12T17:37:44.428690537Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:37:44.430344 containerd[1539]: time="2025-12-12T17:37:44.430311834Z" level=info msg="Stop container \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" with signal terminated" Dec 12 17:37:44.436561 systemd[1]: cri-containerd-ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c.scope: Deactivated successfully. Dec 12 17:37:44.439115 containerd[1539]: time="2025-12-12T17:37:44.439079645Z" level=info msg="received container exit event container_id:\"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" id:\"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" pid:3265 exited_at:{seconds:1765561064 nanos:438845483}" Dec 12 17:37:44.440922 systemd-networkd[1444]: lxc_health: Link DOWN Dec 12 17:37:44.440928 systemd-networkd[1444]: lxc_health: Lost carrier Dec 12 17:37:44.461423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c-rootfs.mount: Deactivated successfully. 
Dec 12 17:37:44.462363 systemd[1]: cri-containerd-3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a.scope: Deactivated successfully. Dec 12 17:37:44.462748 systemd[1]: cri-containerd-3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a.scope: Consumed 6.369s CPU time, 124.4M memory peak, 136K read from disk, 12.9M written to disk. Dec 12 17:37:44.463818 containerd[1539]: time="2025-12-12T17:37:44.463721022Z" level=info msg="received container exit event container_id:\"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" id:\"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" pid:3337 exited_at:{seconds:1765561064 nanos:462577890}" Dec 12 17:37:44.476294 containerd[1539]: time="2025-12-12T17:37:44.475616745Z" level=info msg="StopContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" returns successfully" Dec 12 17:37:44.478885 containerd[1539]: time="2025-12-12T17:37:44.478841379Z" level=info msg="StopPodSandbox for \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\"" Dec 12 17:37:44.485203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a-rootfs.mount: Deactivated successfully. 
Dec 12 17:37:44.488774 containerd[1539]: time="2025-12-12T17:37:44.488695322Z" level=info msg="Container to stop \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.494371 containerd[1539]: time="2025-12-12T17:37:44.494332540Z" level=info msg="StopContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" returns successfully" Dec 12 17:37:44.495109 containerd[1539]: time="2025-12-12T17:37:44.495077068Z" level=info msg="StopPodSandbox for \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\"" Dec 12 17:37:44.495176 containerd[1539]: time="2025-12-12T17:37:44.495144949Z" level=info msg="Container to stop \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.495176 containerd[1539]: time="2025-12-12T17:37:44.495158389Z" level=info msg="Container to stop \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.495176 containerd[1539]: time="2025-12-12T17:37:44.495167669Z" level=info msg="Container to stop \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.495269 containerd[1539]: time="2025-12-12T17:37:44.495176629Z" level=info msg="Container to stop \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.495269 containerd[1539]: time="2025-12-12T17:37:44.495185069Z" level=info msg="Container to stop \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 12 17:37:44.501270 systemd[1]: 
cri-containerd-0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190.scope: Deactivated successfully. Dec 12 17:37:44.502537 systemd[1]: cri-containerd-4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67.scope: Deactivated successfully. Dec 12 17:37:44.502722 containerd[1539]: time="2025-12-12T17:37:44.502560706Z" level=info msg="received sandbox exit event container_id:\"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" id:\"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" exit_status:137 exited_at:{seconds:1765561064 nanos:502406184}" monitor_name=podsandbox Dec 12 17:37:44.518850 containerd[1539]: time="2025-12-12T17:37:44.518800875Z" level=info msg="received sandbox exit event container_id:\"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" id:\"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" exit_status:137 exited_at:{seconds:1765561064 nanos:504238403}" monitor_name=podsandbox Dec 12 17:37:44.521065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190-rootfs.mount: Deactivated successfully. Dec 12 17:37:44.532179 containerd[1539]: time="2025-12-12T17:37:44.532091453Z" level=info msg="shim disconnected" id=0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190 namespace=k8s.io Dec 12 17:37:44.532451 containerd[1539]: time="2025-12-12T17:37:44.532165454Z" level=warning msg="cleaning up after shim disconnected" id=0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190 namespace=k8s.io Dec 12 17:37:44.532510 containerd[1539]: time="2025-12-12T17:37:44.532496938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:37:44.539959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67-rootfs.mount: Deactivated successfully. 
Dec 12 17:37:44.547288 containerd[1539]: time="2025-12-12T17:37:44.547015569Z" level=info msg="shim disconnected" id=4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67 namespace=k8s.io Dec 12 17:37:44.547288 containerd[1539]: time="2025-12-12T17:37:44.547092650Z" level=warning msg="cleaning up after shim disconnected" id=4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67 namespace=k8s.io Dec 12 17:37:44.547288 containerd[1539]: time="2025-12-12T17:37:44.547126730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 12 17:37:44.560185 containerd[1539]: time="2025-12-12T17:37:44.559773862Z" level=info msg="received sandbox container exit event sandbox_id:\"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" exit_status:137 exited_at:{seconds:1765561064 nanos:504238403}" monitor_name=criService Dec 12 17:37:44.560185 containerd[1539]: time="2025-12-12T17:37:44.559875743Z" level=info msg="received sandbox container exit event sandbox_id:\"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" exit_status:137 exited_at:{seconds:1765561064 nanos:502406184}" monitor_name=criService Dec 12 17:37:44.560185 containerd[1539]: time="2025-12-12T17:37:44.560154786Z" level=info msg="TearDown network for sandbox \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" successfully" Dec 12 17:37:44.560185 containerd[1539]: time="2025-12-12T17:37:44.560176586Z" level=info msg="StopPodSandbox for \"4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67\" returns successfully" Dec 12 17:37:44.560857 containerd[1539]: time="2025-12-12T17:37:44.560829073Z" level=info msg="TearDown network for sandbox \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" successfully" Dec 12 17:37:44.560931 containerd[1539]: time="2025-12-12T17:37:44.560918313Z" level=info msg="StopPodSandbox for \"0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190\" returns successfully" Dec 12 
17:37:44.561932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b51736586e2eb0798242df2f997bf175faa0f81d82d3d43991e0e1587c96d67-shm.mount: Deactivated successfully. Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663077 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cni-path\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663128 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-xtables-lock\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663143 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-run\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663158 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-net\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663182 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-hubble-tls\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663290 kubelet[2679]: I1212 17:37:44.663200 2679 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-hostproc\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.663810 kubelet[2679]: I1212 17:37:44.663217 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-cgroup\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.664004 kubelet[2679]: I1212 17:37:44.663234 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-etc-cni-netd\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666444 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16a452c7-d7af-4f39-96f2-acbbafd66d28-clustermesh-secrets\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666483 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-bpf-maps\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666503 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-config-path\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: 
\"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666522 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2g5r\" (UniqueName: \"kubernetes.io/projected/d48cebf7-fd63-4ddd-8d60-10b990c30aca-kube-api-access-h2g5r\") pod \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\" (UID: \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666541 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48cebf7-fd63-4ddd-8d60-10b990c30aca-cilium-config-path\") pod \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\" (UID: \"d48cebf7-fd63-4ddd-8d60-10b990c30aca\") " Dec 12 17:37:44.667551 kubelet[2679]: I1212 17:37:44.666564 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsnmk\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-kube-api-access-tsnmk\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667757 kubelet[2679]: I1212 17:37:44.666583 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-kernel\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667757 kubelet[2679]: I1212 17:37:44.666598 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-lib-modules\") pod \"16a452c7-d7af-4f39-96f2-acbbafd66d28\" (UID: \"16a452c7-d7af-4f39-96f2-acbbafd66d28\") " Dec 12 17:37:44.667757 kubelet[2679]: I1212 17:37:44.666356 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667757 kubelet[2679]: I1212 17:37:44.666356 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667757 kubelet[2679]: I1212 17:37:44.666398 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cni-path" (OuterVolumeSpecName: "cni-path") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667863 kubelet[2679]: I1212 17:37:44.666400 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667863 kubelet[2679]: I1212 17:37:44.666412 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667863 kubelet[2679]: I1212 17:37:44.666660 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.667863 kubelet[2679]: I1212 17:37:44.667458 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.669934 kubelet[2679]: I1212 17:37:44.669728 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.670587 kubelet[2679]: I1212 17:37:44.670534 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-hostproc" (OuterVolumeSpecName: "hostproc") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.670675 kubelet[2679]: I1212 17:37:44.670655 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:37:44.670722 kubelet[2679]: I1212 17:37:44.670700 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 17:37:44.672316 kubelet[2679]: I1212 17:37:44.672285 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:37:44.672495 kubelet[2679]: I1212 17:37:44.672378 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48cebf7-fd63-4ddd-8d60-10b990c30aca-kube-api-access-h2g5r" (OuterVolumeSpecName: "kube-api-access-h2g5r") pod "d48cebf7-fd63-4ddd-8d60-10b990c30aca" (UID: "d48cebf7-fd63-4ddd-8d60-10b990c30aca"). InnerVolumeSpecName "kube-api-access-h2g5r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:37:44.672829 kubelet[2679]: I1212 17:37:44.672779 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48cebf7-fd63-4ddd-8d60-10b990c30aca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d48cebf7-fd63-4ddd-8d60-10b990c30aca" (UID: "d48cebf7-fd63-4ddd-8d60-10b990c30aca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:37:44.674219 kubelet[2679]: I1212 17:37:44.674187 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-kube-api-access-tsnmk" (OuterVolumeSpecName: "kube-api-access-tsnmk") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "kube-api-access-tsnmk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:37:44.677357 kubelet[2679]: I1212 17:37:44.677313 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a452c7-d7af-4f39-96f2-acbbafd66d28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16a452c7-d7af-4f39-96f2-acbbafd66d28" (UID: "16a452c7-d7af-4f39-96f2-acbbafd66d28"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:37:44.767727 kubelet[2679]: I1212 17:37:44.767663 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tsnmk\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-kube-api-access-tsnmk\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767727 kubelet[2679]: I1212 17:37:44.767711 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767727 kubelet[2679]: I1212 17:37:44.767723 2679 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767727 kubelet[2679]: I1212 17:37:44.767733 2679 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767727 kubelet[2679]: I1212 17:37:44.767743 2679 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767750 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767758 2679 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 
17:37:44.767765 2679 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767772 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767779 2679 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16a452c7-d7af-4f39-96f2-acbbafd66d28-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767791 2679 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767799 2679 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16a452c7-d7af-4f39-96f2-acbbafd66d28-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.767942 kubelet[2679]: I1212 17:37:44.767806 2679 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16a452c7-d7af-4f39-96f2-acbbafd66d28-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.768106 kubelet[2679]: I1212 17:37:44.767814 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16a452c7-d7af-4f39-96f2-acbbafd66d28-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.768106 kubelet[2679]: I1212 17:37:44.767822 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2g5r\" (UniqueName: 
\"kubernetes.io/projected/d48cebf7-fd63-4ddd-8d60-10b990c30aca-kube-api-access-h2g5r\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:44.768106 kubelet[2679]: I1212 17:37:44.767830 2679 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48cebf7-fd63-4ddd-8d60-10b990c30aca-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 12 17:37:45.009262 systemd[1]: Removed slice kubepods-besteffort-podd48cebf7_fd63_4ddd_8d60_10b990c30aca.slice - libcontainer container kubepods-besteffort-podd48cebf7_fd63_4ddd_8d60_10b990c30aca.slice. Dec 12 17:37:45.013324 systemd[1]: Removed slice kubepods-burstable-pod16a452c7_d7af_4f39_96f2_acbbafd66d28.slice - libcontainer container kubepods-burstable-pod16a452c7_d7af_4f39_96f2_acbbafd66d28.slice. Dec 12 17:37:45.013428 systemd[1]: kubepods-burstable-pod16a452c7_d7af_4f39_96f2_acbbafd66d28.slice: Consumed 6.464s CPU time, 124.8M memory peak, 140K read from disk, 12.9M written to disk. 
Dec 12 17:37:45.211258 kubelet[2679]: I1212 17:37:45.211204 2679 scope.go:117] "RemoveContainer" containerID="ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c" Dec 12 17:37:45.217360 containerd[1539]: time="2025-12-12T17:37:45.217297456Z" level=info msg="RemoveContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\"" Dec 12 17:37:45.223893 containerd[1539]: time="2025-12-12T17:37:45.223024593Z" level=info msg="RemoveContainer for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" returns successfully" Dec 12 17:37:45.224457 kubelet[2679]: I1212 17:37:45.223372 2679 scope.go:117] "RemoveContainer" containerID="ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c" Dec 12 17:37:45.235320 containerd[1539]: time="2025-12-12T17:37:45.223621479Z" level=error msg="ContainerStatus for \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\": not found" Dec 12 17:37:45.236173 kubelet[2679]: E1212 17:37:45.236112 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\": not found" containerID="ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c" Dec 12 17:37:45.245862 kubelet[2679]: I1212 17:37:45.245742 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c"} err="failed to get container status \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac1dd7ab24a59f8589d813353e0ba8dfb1f8777e412fb5b9e2d88e1a9679b22c\": not found" Dec 12 17:37:45.245862 
kubelet[2679]: I1212 17:37:45.245867 2679 scope.go:117] "RemoveContainer" containerID="3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a" Dec 12 17:37:45.248267 containerd[1539]: time="2025-12-12T17:37:45.247915042Z" level=info msg="RemoveContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\"" Dec 12 17:37:45.252732 containerd[1539]: time="2025-12-12T17:37:45.252679329Z" level=info msg="RemoveContainer for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" returns successfully" Dec 12 17:37:45.253159 kubelet[2679]: I1212 17:37:45.253132 2679 scope.go:117] "RemoveContainer" containerID="94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064" Dec 12 17:37:45.254802 containerd[1539]: time="2025-12-12T17:37:45.254772670Z" level=info msg="RemoveContainer for \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\"" Dec 12 17:37:45.258551 containerd[1539]: time="2025-12-12T17:37:45.258497468Z" level=info msg="RemoveContainer for \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" returns successfully" Dec 12 17:37:45.258775 kubelet[2679]: I1212 17:37:45.258749 2679 scope.go:117] "RemoveContainer" containerID="7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb" Dec 12 17:37:45.261453 containerd[1539]: time="2025-12-12T17:37:45.261119214Z" level=info msg="RemoveContainer for \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\"" Dec 12 17:37:45.268204 containerd[1539]: time="2025-12-12T17:37:45.268099244Z" level=info msg="RemoveContainer for \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" returns successfully" Dec 12 17:37:45.270062 kubelet[2679]: I1212 17:37:45.270037 2679 scope.go:117] "RemoveContainer" containerID="93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82" Dec 12 17:37:45.271857 containerd[1539]: time="2025-12-12T17:37:45.271805001Z" level=info msg="RemoveContainer for 
\"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\"" Dec 12 17:37:45.276057 containerd[1539]: time="2025-12-12T17:37:45.276009763Z" level=info msg="RemoveContainer for \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" returns successfully" Dec 12 17:37:45.276318 kubelet[2679]: I1212 17:37:45.276293 2679 scope.go:117] "RemoveContainer" containerID="1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200" Dec 12 17:37:45.277908 containerd[1539]: time="2025-12-12T17:37:45.277877461Z" level=info msg="RemoveContainer for \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\"" Dec 12 17:37:45.283067 containerd[1539]: time="2025-12-12T17:37:45.283022873Z" level=info msg="RemoveContainer for \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" returns successfully" Dec 12 17:37:45.283439 kubelet[2679]: I1212 17:37:45.283404 2679 scope.go:117] "RemoveContainer" containerID="3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a" Dec 12 17:37:45.283682 containerd[1539]: time="2025-12-12T17:37:45.283643399Z" level=error msg="ContainerStatus for \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\": not found" Dec 12 17:37:45.283853 kubelet[2679]: E1212 17:37:45.283814 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\": not found" containerID="3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a" Dec 12 17:37:45.283928 kubelet[2679]: I1212 17:37:45.283849 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a"} err="failed to get 
container status \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dae4450ce5af8345ece4fa88c64a027e4decca06a847de1578c8c65a9facb2a\": not found" Dec 12 17:37:45.283928 kubelet[2679]: I1212 17:37:45.283873 2679 scope.go:117] "RemoveContainer" containerID="94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064" Dec 12 17:37:45.284097 containerd[1539]: time="2025-12-12T17:37:45.284064563Z" level=error msg="ContainerStatus for \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\": not found" Dec 12 17:37:45.284202 kubelet[2679]: E1212 17:37:45.284180 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\": not found" containerID="94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064" Dec 12 17:37:45.284232 kubelet[2679]: I1212 17:37:45.284206 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064"} err="failed to get container status \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\": rpc error: code = NotFound desc = an error occurred when try to find container \"94094e233804e082c49aa5eab6e47fc9868b3c73311d0b5a0325fc221469e064\": not found" Dec 12 17:37:45.284232 kubelet[2679]: I1212 17:37:45.284226 2679 scope.go:117] "RemoveContainer" containerID="7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb" Dec 12 17:37:45.284679 containerd[1539]: time="2025-12-12T17:37:45.284649049Z" level=error msg="ContainerStatus for 
\"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\": not found" Dec 12 17:37:45.284969 kubelet[2679]: E1212 17:37:45.284894 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\": not found" containerID="7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb" Dec 12 17:37:45.285060 kubelet[2679]: I1212 17:37:45.285039 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb"} err="failed to get container status \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c3c10c4cbebfaebae55b417b93bab88845082b4b5de85c4cab88affa33928fb\": not found" Dec 12 17:37:45.285217 kubelet[2679]: I1212 17:37:45.285117 2679 scope.go:117] "RemoveContainer" containerID="93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82" Dec 12 17:37:45.285358 containerd[1539]: time="2025-12-12T17:37:45.285326376Z" level=error msg="ContainerStatus for \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\": not found" Dec 12 17:37:45.285662 kubelet[2679]: E1212 17:37:45.285638 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\": not found" 
containerID="93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82" Dec 12 17:37:45.285707 kubelet[2679]: I1212 17:37:45.285669 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82"} err="failed to get container status \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\": rpc error: code = NotFound desc = an error occurred when try to find container \"93f4202cb0fa2115a4b6b432c193765215a0f52b2f536a9a549ebec08b0e8b82\": not found" Dec 12 17:37:45.285707 kubelet[2679]: I1212 17:37:45.285690 2679 scope.go:117] "RemoveContainer" containerID="1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200" Dec 12 17:37:45.285924 containerd[1539]: time="2025-12-12T17:37:45.285893181Z" level=error msg="ContainerStatus for \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\": not found" Dec 12 17:37:45.286045 kubelet[2679]: E1212 17:37:45.286028 2679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\": not found" containerID="1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200" Dec 12 17:37:45.286084 kubelet[2679]: I1212 17:37:45.286050 2679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200"} err="failed to get container status \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d761bdf084417aa5b2d0fb7f706020224e8c9d22ab131a3236062229bf73200\": not found" Dec 12 
17:37:45.461028 systemd[1]: var-lib-kubelet-pods-d48cebf7\x2dfd63\x2d4ddd\x2d8d60\x2d10b990c30aca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2g5r.mount: Deactivated successfully. Dec 12 17:37:45.461145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b4f995114679742057ae3d94adfb32b782f82ba74914057d0e490a1d6b98190-shm.mount: Deactivated successfully. Dec 12 17:37:45.461198 systemd[1]: var-lib-kubelet-pods-16a452c7\x2dd7af\x2d4f39\x2d96f2\x2dacbbafd66d28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtsnmk.mount: Deactivated successfully. Dec 12 17:37:45.461274 systemd[1]: var-lib-kubelet-pods-16a452c7\x2dd7af\x2d4f39\x2d96f2\x2dacbbafd66d28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 17:37:45.461327 systemd[1]: var-lib-kubelet-pods-16a452c7\x2dd7af\x2d4f39\x2d96f2\x2dacbbafd66d28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 17:37:46.356016 sshd[4276]: Connection closed by 10.0.0.1 port 54730 Dec 12 17:37:46.355472 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:46.369658 systemd[1]: sshd@21-10.0.0.78:22-10.0.0.1:54730.service: Deactivated successfully. Dec 12 17:37:46.371530 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:37:46.371731 systemd[1]: session-22.scope: Consumed 1.455s CPU time, 24.6M memory peak. Dec 12 17:37:46.375825 systemd[1]: Started sshd@22-10.0.0.78:22-10.0.0.1:54872.service - OpenSSH per-connection server daemon (10.0.0.1:54872). Dec 12 17:37:46.376844 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:37:46.382560 systemd-logind[1512]: Removed session 22. 
Dec 12 17:37:46.435846 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 54872 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:46.438071 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:46.443788 systemd-logind[1512]: New session 23 of user core. Dec 12 17:37:46.454405 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:37:47.005258 kubelet[2679]: I1212 17:37:47.004439 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a452c7-d7af-4f39-96f2-acbbafd66d28" path="/var/lib/kubelet/pods/16a452c7-d7af-4f39-96f2-acbbafd66d28/volumes" Dec 12 17:37:47.005258 kubelet[2679]: I1212 17:37:47.005003 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48cebf7-fd63-4ddd-8d60-10b990c30aca" path="/var/lib/kubelet/pods/d48cebf7-fd63-4ddd-8d60-10b990c30aca/volumes" Dec 12 17:37:47.762817 sshd[4423]: Connection closed by 10.0.0.1 port 54872 Dec 12 17:37:47.763637 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:47.780859 systemd[1]: sshd@22-10.0.0.78:22-10.0.0.1:54872.service: Deactivated successfully. Dec 12 17:37:47.783737 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:37:47.783920 systemd[1]: session-23.scope: Consumed 1.220s CPU time, 23.7M memory peak. Dec 12 17:37:47.787783 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:37:47.794705 systemd[1]: Started sshd@23-10.0.0.78:22-10.0.0.1:54886.service - OpenSSH per-connection server daemon (10.0.0.1:54886). 
Dec 12 17:37:47.796847 kubelet[2679]: I1212 17:37:47.796367 2679 memory_manager.go:355] "RemoveStaleState removing state" podUID="16a452c7-d7af-4f39-96f2-acbbafd66d28" containerName="cilium-agent" Dec 12 17:37:47.796847 kubelet[2679]: I1212 17:37:47.796453 2679 memory_manager.go:355] "RemoveStaleState removing state" podUID="d48cebf7-fd63-4ddd-8d60-10b990c30aca" containerName="cilium-operator" Dec 12 17:37:47.796855 systemd-logind[1512]: Removed session 23. Dec 12 17:37:47.816918 systemd[1]: Created slice kubepods-burstable-pod2e46cf28_bdaf_484f_a03f_26227fa48f60.slice - libcontainer container kubepods-burstable-pod2e46cf28_bdaf_484f_a03f_26227fa48f60.slice. Dec 12 17:37:47.861862 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 54886 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:47.863056 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:47.867199 systemd-logind[1512]: New session 24 of user core. Dec 12 17:37:47.881505 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 12 17:37:47.888372 kubelet[2679]: I1212 17:37:47.888343 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-cilium-cgroup\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888454 kubelet[2679]: I1212 17:37:47.888384 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-bpf-maps\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888454 kubelet[2679]: I1212 17:37:47.888414 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-host-proc-sys-net\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888454 kubelet[2679]: I1212 17:37:47.888429 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-cni-path\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888454 kubelet[2679]: I1212 17:37:47.888445 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-etc-cni-netd\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888543 kubelet[2679]: I1212 17:37:47.888460 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-jnlwf\" (UniqueName: \"kubernetes.io/projected/2e46cf28-bdaf-484f-a03f-26227fa48f60-kube-api-access-jnlwf\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888543 kubelet[2679]: I1212 17:37:47.888484 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-xtables-lock\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888543 kubelet[2679]: I1212 17:37:47.888501 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e46cf28-bdaf-484f-a03f-26227fa48f60-cilium-ipsec-secrets\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888543 kubelet[2679]: I1212 17:37:47.888519 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-lib-modules\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888543 kubelet[2679]: I1212 17:37:47.888535 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-hostproc\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888637 kubelet[2679]: I1212 17:37:47.888556 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2e46cf28-bdaf-484f-a03f-26227fa48f60-cilium-config-path\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888637 kubelet[2679]: I1212 17:37:47.888572 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-host-proc-sys-kernel\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888637 kubelet[2679]: I1212 17:37:47.888589 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e46cf28-bdaf-484f-a03f-26227fa48f60-cilium-run\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888637 kubelet[2679]: I1212 17:37:47.888602 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e46cf28-bdaf-484f-a03f-26227fa48f60-hubble-tls\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.888637 kubelet[2679]: I1212 17:37:47.888619 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e46cf28-bdaf-484f-a03f-26227fa48f60-clustermesh-secrets\") pod \"cilium-c296p\" (UID: \"2e46cf28-bdaf-484f-a03f-26227fa48f60\") " pod="kube-system/cilium-c296p" Dec 12 17:37:47.931340 sshd[4438]: Connection closed by 10.0.0.1 port 54886 Dec 12 17:37:47.931643 sshd-session[4435]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:47.941836 systemd[1]: sshd@23-10.0.0.78:22-10.0.0.1:54886.service: Deactivated successfully. 
Dec 12 17:37:47.943820 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:37:47.944962 systemd-logind[1512]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:37:47.949355 systemd[1]: Started sshd@24-10.0.0.78:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898). Dec 12 17:37:47.951564 systemd-logind[1512]: Removed session 24. Dec 12 17:37:48.014970 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:37:48.016680 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:48.020086 systemd-logind[1512]: New session 25 of user core. Dec 12 17:37:48.026407 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:37:48.064385 kubelet[2679]: E1212 17:37:48.064349 2679 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 12 17:37:48.123053 containerd[1539]: time="2025-12-12T17:37:48.122998696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c296p,Uid:2e46cf28-bdaf-484f-a03f-26227fa48f60,Namespace:kube-system,Attempt:0,}" Dec 12 17:37:48.140042 containerd[1539]: time="2025-12-12T17:37:48.139978486Z" level=info msg="connecting to shim 9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:37:48.169472 systemd[1]: Started cri-containerd-9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7.scope - libcontainer container 9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7. 
Dec 12 17:37:48.189901 containerd[1539]: time="2025-12-12T17:37:48.189851325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c296p,Uid:2e46cf28-bdaf-484f-a03f-26227fa48f60,Namespace:kube-system,Attempt:0,} returns sandbox id \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\"" Dec 12 17:37:48.192626 containerd[1539]: time="2025-12-12T17:37:48.192578029Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 17:37:48.211010 containerd[1539]: time="2025-12-12T17:37:48.210958511Z" level=info msg="Container a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:37:48.220457 containerd[1539]: time="2025-12-12T17:37:48.220411914Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5\"" Dec 12 17:37:48.220960 containerd[1539]: time="2025-12-12T17:37:48.220939519Z" level=info msg="StartContainer for \"a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5\"" Dec 12 17:37:48.221856 containerd[1539]: time="2025-12-12T17:37:48.221811847Z" level=info msg="connecting to shim a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" protocol=ttrpc version=3 Dec 12 17:37:48.243439 systemd[1]: Started cri-containerd-a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5.scope - libcontainer container a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5. 
Dec 12 17:37:48.272860 containerd[1539]: time="2025-12-12T17:37:48.272447213Z" level=info msg="StartContainer for \"a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5\" returns successfully"
Dec 12 17:37:48.279160 systemd[1]: cri-containerd-a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5.scope: Deactivated successfully.
Dec 12 17:37:48.280392 containerd[1539]: time="2025-12-12T17:37:48.280336402Z" level=info msg="received container exit event container_id:\"a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5\" id:\"a4f7c4dc8170d95d7e672e685d499a5a959b86d83a687b540b05a129cc3501f5\" pid:4520 exited_at:{seconds:1765561068 nanos:278934710}"
Dec 12 17:37:49.245812 containerd[1539]: time="2025-12-12T17:37:49.244141923Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 17:37:49.259970 containerd[1539]: time="2025-12-12T17:37:49.259919856Z" level=info msg="Container cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:37:49.260879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952054982.mount: Deactivated successfully.
Dec 12 17:37:49.272656 containerd[1539]: time="2025-12-12T17:37:49.272598163Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9\""
Dec 12 17:37:49.273937 containerd[1539]: time="2025-12-12T17:37:49.273422170Z" level=info msg="StartContainer for \"cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9\""
Dec 12 17:37:49.274560 containerd[1539]: time="2025-12-12T17:37:49.274534899Z" level=info msg="connecting to shim cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" protocol=ttrpc version=3
Dec 12 17:37:49.312476 systemd[1]: Started cri-containerd-cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9.scope - libcontainer container cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9.
Dec 12 17:37:49.343215 containerd[1539]: time="2025-12-12T17:37:49.343177078Z" level=info msg="StartContainer for \"cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9\" returns successfully"
Dec 12 17:37:49.350882 systemd[1]: cri-containerd-cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9.scope: Deactivated successfully.
Dec 12 17:37:49.353358 containerd[1539]: time="2025-12-12T17:37:49.353235123Z" level=info msg="received container exit event container_id:\"cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9\" id:\"cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9\" pid:4569 exited_at:{seconds:1765561069 nanos:353004961}"
Dec 12 17:37:49.376343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc9c30e725092459d54846e69da468ddc75ab2afc2a1ae22b4311980e2c6ccc9-rootfs.mount: Deactivated successfully.
Dec 12 17:37:50.247699 containerd[1539]: time="2025-12-12T17:37:50.247647423Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 17:37:50.261116 containerd[1539]: time="2025-12-12T17:37:50.261047892Z" level=info msg="Container c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:37:50.267434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470813472.mount: Deactivated successfully.
Dec 12 17:37:50.274408 containerd[1539]: time="2025-12-12T17:37:50.274359199Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed\""
Dec 12 17:37:50.275306 containerd[1539]: time="2025-12-12T17:37:50.275232526Z" level=info msg="StartContainer for \"c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed\""
Dec 12 17:37:50.277896 containerd[1539]: time="2025-12-12T17:37:50.277866947Z" level=info msg="connecting to shim c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" protocol=ttrpc version=3
Dec 12 17:37:50.304461 systemd[1]: Started cri-containerd-c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed.scope - libcontainer container c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed.
Dec 12 17:37:50.381814 containerd[1539]: time="2025-12-12T17:37:50.381776627Z" level=info msg="StartContainer for \"c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed\" returns successfully"
Dec 12 17:37:50.383406 systemd[1]: cri-containerd-c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed.scope: Deactivated successfully.
Dec 12 17:37:50.386018 containerd[1539]: time="2025-12-12T17:37:50.385982621Z" level=info msg="received container exit event container_id:\"c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed\" id:\"c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed\" pid:4613 exited_at:{seconds:1765561070 nanos:385496617}"
Dec 12 17:37:50.410816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3838fa121d62c3810aaa57a8fd82f58c8540ee9e8cdd8793f2e238d748769ed-rootfs.mount: Deactivated successfully.
Dec 12 17:37:51.252451 containerd[1539]: time="2025-12-12T17:37:51.252409896Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 17:37:51.261727 containerd[1539]: time="2025-12-12T17:37:51.260402998Z" level=info msg="Container 73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:37:51.268714 containerd[1539]: time="2025-12-12T17:37:51.268668462Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f\""
Dec 12 17:37:51.269280 containerd[1539]: time="2025-12-12T17:37:51.269069825Z" level=info msg="StartContainer for \"73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f\""
Dec 12 17:37:51.270113 containerd[1539]: time="2025-12-12T17:37:51.270084793Z" level=info msg="connecting to shim 73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" protocol=ttrpc version=3
Dec 12 17:37:51.289451 systemd[1]: Started cri-containerd-73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f.scope - libcontainer container 73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f.
Dec 12 17:37:51.310619 systemd[1]: cri-containerd-73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f.scope: Deactivated successfully.
Dec 12 17:37:51.313513 containerd[1539]: time="2025-12-12T17:37:51.313393008Z" level=info msg="received container exit event container_id:\"73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f\" id:\"73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f\" pid:4653 exited_at:{seconds:1765561071 nanos:311724355}"
Dec 12 17:37:51.315031 containerd[1539]: time="2025-12-12T17:37:51.315004340Z" level=info msg="StartContainer for \"73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f\" returns successfully"
Dec 12 17:37:51.331491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ca51ebf47064dd4476db46fb8b497c9a06aa1370170dde805ad262456b283f-rootfs.mount: Deactivated successfully.
Dec 12 17:37:52.262347 containerd[1539]: time="2025-12-12T17:37:52.262306619Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 17:37:52.274357 containerd[1539]: time="2025-12-12T17:37:52.273549622Z" level=info msg="Container 3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:37:52.282654 containerd[1539]: time="2025-12-12T17:37:52.282591049Z" level=info msg="CreateContainer within sandbox \"9368d80e1777d108e4c89d9b27576de5ce49e5ac2ed63b63a3f4324b1b0d22e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886\""
Dec 12 17:37:52.283334 containerd[1539]: time="2025-12-12T17:37:52.283308294Z" level=info msg="StartContainer for \"3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886\""
Dec 12 17:37:52.285007 containerd[1539]: time="2025-12-12T17:37:52.284971507Z" level=info msg="connecting to shim 3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886" address="unix:///run/containerd/s/e783b23c7b1cd78ffaab0f7474405fda3a2df0dbe10c0e2ee38af4d6857c5f88" protocol=ttrpc version=3
Dec 12 17:37:52.325440 systemd[1]: Started cri-containerd-3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886.scope - libcontainer container 3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886.
Dec 12 17:37:52.363183 containerd[1539]: time="2025-12-12T17:37:52.363142845Z" level=info msg="StartContainer for \"3ae33ca5c2993a932f44be5ebb7d0bf030c94872b5acc38bb49ac2c125a80886\" returns successfully"
Dec 12 17:37:52.631304 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 12 17:37:55.576858 systemd-networkd[1444]: lxc_health: Link UP
Dec 12 17:37:55.577069 systemd-networkd[1444]: lxc_health: Gained carrier
Dec 12 17:37:56.144180 kubelet[2679]: I1212 17:37:56.143109    2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c296p" podStartSLOduration=9.143093681 podStartE2EDuration="9.143093681s" podCreationTimestamp="2025-12-12 17:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:37:53.277844402 +0000 UTC m=+80.375419062" watchObservedRunningTime="2025-12-12 17:37:56.143093681 +0000 UTC m=+83.240668301"
Dec 12 17:37:56.943453 systemd-networkd[1444]: lxc_health: Gained IPv6LL
Dec 12 17:38:00.824937 sshd[4452]: Connection closed by 10.0.0.1 port 54898
Dec 12 17:38:00.825507 sshd-session[4445]: pam_unix(sshd:session): session closed for user core
Dec 12 17:38:00.829267 systemd[1]: sshd@24-10.0.0.78:22-10.0.0.1:54898.service: Deactivated successfully.
Dec 12 17:38:00.831075 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 17:38:00.831758 systemd-logind[1512]: Session 25 logged out. Waiting for processes to exit.
Dec 12 17:38:00.832827 systemd-logind[1512]: Removed session 25.