Dec 13 23:19:40.217039 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 23:19:40.217062 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Sat Dec 13 21:04:10 -00 2025
Dec 13 23:19:40.217070 kernel: KASLR enabled
Dec 13 23:19:40.217076 kernel: efi: EFI v2.7 by EDK II
Dec 13 23:19:40.217082 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 13 23:19:40.217087 kernel: random: crng init done
Dec 13 23:19:40.217094 kernel: secureboot: Secure boot disabled
Dec 13 23:19:40.217100 kernel: ACPI: Early table checksum verification disabled
Dec 13 23:19:40.217108 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 13 23:19:40.217114 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 23:19:40.217120 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217126 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217132 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217138 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217147 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217153 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217159 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217166 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217172 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 23:19:40.217178 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 23:19:40.217185 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 13 23:19:40.217191 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:19:40.217199 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 13 23:19:40.217205 kernel: Zone ranges:
Dec 13 23:19:40.217211 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:19:40.217218 kernel: DMA32 empty
Dec 13 23:19:40.217224 kernel: Normal empty
Dec 13 23:19:40.217230 kernel: Device empty
Dec 13 23:19:40.217236 kernel: Movable zone start for each node
Dec 13 23:19:40.217243 kernel: Early memory node ranges
Dec 13 23:19:40.217249 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 13 23:19:40.217255 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 13 23:19:40.217262 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 13 23:19:40.217268 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 13 23:19:40.217276 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 13 23:19:40.217282 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 13 23:19:40.217288 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 13 23:19:40.217295 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 13 23:19:40.217301 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 13 23:19:40.217307 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 23:19:40.217317 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 23:19:40.217324 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 23:19:40.217331 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 23:19:40.217337 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 23:19:40.217344 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 23:19:40.217351 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 13 23:19:40.217358 kernel: psci: probing for conduit method from ACPI.
Dec 13 23:19:40.217365 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 23:19:40.217373 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 23:19:40.217379 kernel: psci: Trusted OS migration not required
Dec 13 23:19:40.217386 kernel: psci: SMC Calling Convention v1.1
Dec 13 23:19:40.217393 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 23:19:40.217400 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 13 23:19:40.217407 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 13 23:19:40.217414 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 23:19:40.217420 kernel: Detected PIPT I-cache on CPU0
Dec 13 23:19:40.217427 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 23:19:40.217434 kernel: CPU features: detected: Spectre-v4
Dec 13 23:19:40.217441 kernel: CPU features: detected: Spectre-BHB
Dec 13 23:19:40.217449 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 23:19:40.217455 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 23:19:40.217462 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 23:19:40.217469 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 23:19:40.217476 kernel: alternatives: applying boot alternatives
Dec 13 23:19:40.217493 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913
Dec 13 23:19:40.217501 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 23:19:40.217508 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 23:19:40.217515 kernel: Fallback order for Node 0: 0
Dec 13 23:19:40.217521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 13 23:19:40.217530 kernel: Policy zone: DMA
Dec 13 23:19:40.217537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 23:19:40.217544 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 13 23:19:40.217551 kernel: software IO TLB: area num 4.
Dec 13 23:19:40.217557 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 13 23:19:40.217564 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 13 23:19:40.217571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 23:19:40.217578 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 23:19:40.217585 kernel: rcu: RCU event tracing is enabled.
Dec 13 23:19:40.217592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 23:19:40.217599 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 23:19:40.217608 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 23:19:40.217615 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 23:19:40.217621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 23:19:40.217628 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 23:19:40.217635 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 23:19:40.217642 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 23:19:40.217649 kernel: GICv3: 256 SPIs implemented
Dec 13 23:19:40.217655 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 23:19:40.217662 kernel: Root IRQ handler: gic_handle_irq
Dec 13 23:19:40.217669 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 23:19:40.217676 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 13 23:19:40.217684 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 23:19:40.217690 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 23:19:40.217697 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 23:19:40.217704 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 13 23:19:40.217711 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 13 23:19:40.217718 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 13 23:19:40.217724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 23:19:40.217731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:19:40.217738 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 23:19:40.217745 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 23:19:40.217752 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 23:19:40.217760 kernel: arm-pv: using stolen time PV
Dec 13 23:19:40.217767 kernel: Console: colour dummy device 80x25
Dec 13 23:19:40.217774 kernel: ACPI: Core revision 20240827
Dec 13 23:19:40.217782 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 23:19:40.217789 kernel: pid_max: default: 32768 minimum: 301
Dec 13 23:19:40.217796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 13 23:19:40.217803 kernel: landlock: Up and running.
Dec 13 23:19:40.217810 kernel: SELinux: Initializing.
Dec 13 23:19:40.217818 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 23:19:40.217825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 23:19:40.217833 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 23:19:40.217840 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 23:19:40.217847 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 13 23:19:40.217854 kernel: Remapping and enabling EFI services.
Dec 13 23:19:40.217861 kernel: smp: Bringing up secondary CPUs ...
Dec 13 23:19:40.217869 kernel: Detected PIPT I-cache on CPU1
Dec 13 23:19:40.217881 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 23:19:40.217889 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 13 23:19:40.217897 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:19:40.217904 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 23:19:40.217912 kernel: Detected PIPT I-cache on CPU2
Dec 13 23:19:40.217919 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 23:19:40.217928 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 13 23:19:40.217936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:19:40.217943 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 23:19:40.217950 kernel: Detected PIPT I-cache on CPU3
Dec 13 23:19:40.217972 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 23:19:40.217980 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 13 23:19:40.217988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 23:19:40.217997 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 23:19:40.218004 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 23:19:40.218012 kernel: SMP: Total of 4 processors activated.
Dec 13 23:19:40.218019 kernel: CPU: All CPU(s) started at EL1
Dec 13 23:19:40.218026 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 23:19:40.218034 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 23:19:40.218041 kernel: CPU features: detected: Common not Private translations
Dec 13 23:19:40.218050 kernel: CPU features: detected: CRC32 instructions
Dec 13 23:19:40.218057 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 23:19:40.218065 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 23:19:40.218072 kernel: CPU features: detected: LSE atomic instructions
Dec 13 23:19:40.218079 kernel: CPU features: detected: Privileged Access Never
Dec 13 23:19:40.218087 kernel: CPU features: detected: RAS Extension Support
Dec 13 23:19:40.218094 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 23:19:40.218102 kernel: alternatives: applying system-wide alternatives
Dec 13 23:19:40.218110 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 13 23:19:40.218118 kernel: Memory: 2450848K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 99104K reserved, 16384K cma-reserved)
Dec 13 23:19:40.218125 kernel: devtmpfs: initialized
Dec 13 23:19:40.218133 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 23:19:40.218140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 23:19:40.218148 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 23:19:40.218155 kernel: 0 pages in range for non-PLT usage
Dec 13 23:19:40.218164 kernel: 515168 pages in range for PLT usage
Dec 13 23:19:40.218171 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 23:19:40.218179 kernel: SMBIOS 3.0.0 present.
Dec 13 23:19:40.218186 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 13 23:19:40.218193 kernel: DMI: Memory slots populated: 1/1
Dec 13 23:19:40.218200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 23:19:40.218208 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 23:19:40.218217 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 23:19:40.218224 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 23:19:40.218232 kernel: audit: initializing netlink subsys (disabled)
Dec 13 23:19:40.218240 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Dec 13 23:19:40.218247 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 23:19:40.218254 kernel: cpuidle: using governor menu
Dec 13 23:19:40.218262 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 23:19:40.218270 kernel: ASID allocator initialised with 32768 entries
Dec 13 23:19:40.218278 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 23:19:40.218285 kernel: Serial: AMBA PL011 UART driver
Dec 13 23:19:40.218293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 23:19:40.218300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 23:19:40.218310 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 23:19:40.218318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 23:19:40.218326 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 23:19:40.218336 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 23:19:40.218344 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 23:19:40.218351 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 23:19:40.218358 kernel: ACPI: Added _OSI(Module Device)
Dec 13 23:19:40.218366 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 23:19:40.218373 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 23:19:40.218380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 23:19:40.218389 kernel: ACPI: Interpreter enabled
Dec 13 23:19:40.218397 kernel: ACPI: Using GIC for interrupt routing
Dec 13 23:19:40.218404 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 23:19:40.218411 kernel: ACPI: CPU0 has been hot-added
Dec 13 23:19:40.218418 kernel: ACPI: CPU1 has been hot-added
Dec 13 23:19:40.218426 kernel: ACPI: CPU2 has been hot-added
Dec 13 23:19:40.218433 kernel: ACPI: CPU3 has been hot-added
Dec 13 23:19:40.218441 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 23:19:40.218449 kernel: printk: legacy console [ttyAMA0] enabled
Dec 13 23:19:40.218456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 23:19:40.218615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 23:19:40.218705 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 23:19:40.218785 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 23:19:40.218866 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 23:19:40.218944 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 23:19:40.218965 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 23:19:40.218974 kernel: PCI host bridge to bus 0000:00
Dec 13 23:19:40.219067 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 23:19:40.219141 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 23:19:40.219215 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 23:19:40.219287 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 23:19:40.219380 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 13 23:19:40.219469 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 23:19:40.219567 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 13 23:19:40.219651 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 13 23:19:40.219839 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 23:19:40.219930 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 13 23:19:40.220037 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 13 23:19:40.220118 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 13 23:19:40.220260 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 23:19:40.220440 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 23:19:40.220609 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 23:19:40.220622 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 23:19:40.220663 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 23:19:40.220674 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 23:19:40.220682 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 23:19:40.220690 kernel: iommu: Default domain type: Translated
Dec 13 23:19:40.220702 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 23:19:40.220709 kernel: efivars: Registered efivars operations
Dec 13 23:19:40.220717 kernel: vgaarb: loaded
Dec 13 23:19:40.220755 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 23:19:40.220763 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 23:19:40.220771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 23:19:40.220778 kernel: pnp: PnP ACPI init
Dec 13 23:19:40.221174 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 23:19:40.221192 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 23:19:40.221200 kernel: NET: Registered PF_INET protocol family
Dec 13 23:19:40.221208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 23:19:40.221216 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 23:19:40.221224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 23:19:40.221232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 23:19:40.221247 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 23:19:40.221255 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 23:19:40.221262 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 23:19:40.221270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 23:19:40.221474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 23:19:40.221492 kernel: PCI: CLS 0 bytes, default 64
Dec 13 23:19:40.221500 kernel: kvm [1]: HYP mode not available
Dec 13 23:19:40.221514 kernel: Initialise system trusted keyrings
Dec 13 23:19:40.221521 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 23:19:40.221529 kernel: Key type asymmetric registered
Dec 13 23:19:40.221536 kernel: Asymmetric key parser 'x509' registered
Dec 13 23:19:40.221544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 23:19:40.221551 kernel: io scheduler mq-deadline registered
Dec 13 23:19:40.221559 kernel: io scheduler kyber registered
Dec 13 23:19:40.221568 kernel: io scheduler bfq registered
Dec 13 23:19:40.221576 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 23:19:40.221583 kernel: ACPI: button: Power Button [PWRB]
Dec 13 23:19:40.221592 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 23:19:40.221722 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 23:19:40.221734 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 23:19:40.221742 kernel: thunder_xcv, ver 1.0
Dec 13 23:19:40.221751 kernel: thunder_bgx, ver 1.0
Dec 13 23:19:40.221759 kernel: nicpf, ver 1.0
Dec 13 23:19:40.221766 kernel: nicvf, ver 1.0
Dec 13 23:19:40.221858 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 23:19:40.221935 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-13T23:19:39 UTC (1765667979)
Dec 13 23:19:40.221945 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 23:19:40.221973 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 13 23:19:40.221982 kernel: watchdog: NMI not fully supported
Dec 13 23:19:40.221989 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 23:19:40.221996 kernel: NET: Registered PF_INET6 protocol family
Dec 13 23:19:40.222004 kernel: Segment Routing with IPv6
Dec 13 23:19:40.222012 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 23:19:40.222019 kernel: NET: Registered PF_PACKET protocol family
Dec 13 23:19:40.222027 kernel: Key type dns_resolver registered
Dec 13 23:19:40.222036 kernel: registered taskstats version 1
Dec 13 23:19:40.222043 kernel: Loading compiled-in X.509 certificates
Dec 13 23:19:40.222051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: d89c978154dbb01b4a4598f2db878f2ea4aca29d'
Dec 13 23:19:40.222058 kernel: Demotion targets for Node 0: null
Dec 13 23:19:40.222066 kernel: Key type .fscrypt registered
Dec 13 23:19:40.222074 kernel: Key type fscrypt-provisioning registered
Dec 13 23:19:40.222081 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 23:19:40.222090 kernel: ima: Allocated hash algorithm: sha1
Dec 13 23:19:40.222097 kernel: ima: No architecture policies found
Dec 13 23:19:40.222105 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 23:19:40.222112 kernel: clk: Disabling unused clocks
Dec 13 23:19:40.222120 kernel: PM: genpd: Disabling unused power domains
Dec 13 23:19:40.222127 kernel: Freeing unused kernel memory: 12480K
Dec 13 23:19:40.222134 kernel: Run /init as init process
Dec 13 23:19:40.222143 kernel: with arguments:
Dec 13 23:19:40.222151 kernel: /init
Dec 13 23:19:40.222159 kernel: with environment:
Dec 13 23:19:40.222166 kernel: HOME=/
Dec 13 23:19:40.222173 kernel: TERM=linux
Dec 13 23:19:40.222280 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 23:19:40.222360 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 13 23:19:40.222372 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 23:19:40.222380 kernel: GPT:16515071 != 27000831
Dec 13 23:19:40.222387 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 23:19:40.222395 kernel: GPT:16515071 != 27000831
Dec 13 23:19:40.222402 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 23:19:40.222409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 23:19:40.222418 kernel: SCSI subsystem initialized
Dec 13 23:19:40.222426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 23:19:40.222434 kernel: device-mapper: uevent: version 1.0.3
Dec 13 23:19:40.222442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 13 23:19:40.222449 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:19:40.222456 kernel: raid6: neonx8 gen() 15602 MB/s
Dec 13 23:19:40.222464 kernel: raid6: neonx4 gen() 15197 MB/s
Dec 13 23:19:40.222472 kernel: raid6: neonx2 gen() 13031 MB/s
Dec 13 23:19:40.222554 kernel: raid6: neonx1 gen() 10029 MB/s
Dec 13 23:19:40.222564 kernel: raid6: int64x8 gen() 6719 MB/s
Dec 13 23:19:40.222572 kernel: raid6: int64x4 gen() 7066 MB/s
Dec 13 23:19:40.222579 kernel: raid6: int64x2 gen() 6052 MB/s
Dec 13 23:19:40.222587 kernel: raid6: int64x1 gen() 5009 MB/s
Dec 13 23:19:40.222594 kernel: raid6: using algorithm neonx8 gen() 15602 MB/s
Dec 13 23:19:40.222605 kernel: raid6: .... xor() 11789 MB/s, rmw enabled
Dec 13 23:19:40.222613 kernel: raid6: using neon recovery algorithm
Dec 13 23:19:40.222620 kernel: xor: measuring software checksum speed
Dec 13 23:19:40.222628 kernel: 8regs : 19492 MB/sec
Dec 13 23:19:40.222635 kernel: 32regs : 21042 MB/sec
Dec 13 23:19:40.222643 kernel: arm64_neon : 25045 MB/sec
Dec 13 23:19:40.222650 kernel: xor: using function: arm64_neon (25045 MB/sec)
Dec 13 23:19:40.222657 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 23:19:40.222667 kernel: BTRFS: device fsid a1686a6f-a50a-4e68-84e0-ea41bcdb127c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (204)
Dec 13 23:19:40.222674 kernel: BTRFS info (device dm-0): first mount of filesystem a1686a6f-a50a-4e68-84e0-ea41bcdb127c
Dec 13 23:19:40.222682 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:19:40.222690 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 23:19:40.222698 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 13 23:19:40.222705 kernel: loop: module loaded
Dec 13 23:19:40.222712 kernel: loop0: detected capacity change from 0 to 91832
Dec 13 23:19:40.222721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 23:19:40.222730 systemd[1]: Successfully made /usr/ read-only.
Dec 13 23:19:40.222740 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 23:19:40.222748 systemd[1]: Detected virtualization kvm.
Dec 13 23:19:40.222756 systemd[1]: Detected architecture arm64.
Dec 13 23:19:40.222765 systemd[1]: Running in initrd.
Dec 13 23:19:40.222773 systemd[1]: No hostname configured, using default hostname.
Dec 13 23:19:40.222781 systemd[1]: Hostname set to .
Dec 13 23:19:40.222789 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 23:19:40.222797 systemd[1]: Queued start job for default target initrd.target.
Dec 13 23:19:40.222805 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 23:19:40.222813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 23:19:40.222822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 23:19:40.222831 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 23:19:40.222839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 23:19:40.222848 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 23:19:40.222856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 23:19:40.222865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 23:19:40.222873 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 23:19:40.222881 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 23:19:40.222889 systemd[1]: Reached target paths.target - Path Units.
Dec 13 23:19:40.222897 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 23:19:40.222905 systemd[1]: Reached target swap.target - Swaps.
Dec 13 23:19:40.222913 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 23:19:40.222922 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 23:19:40.222930 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 23:19:40.222938 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 23:19:40.222947 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 23:19:40.222978 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 13 23:19:40.222989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 23:19:40.223001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 23:19:40.223010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 23:19:40.223018 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 23:19:40.223027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 23:19:40.223035 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 23:19:40.223044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 23:19:40.223054 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 23:19:40.223062 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 13 23:19:40.223071 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 23:19:40.223079 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 23:19:40.223087 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 23:19:40.223097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:19:40.223106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 23:19:40.223114 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 23:19:40.223123 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 23:19:40.223132 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 23:19:40.223166 systemd-journald[347]: Collecting audit messages is enabled.
Dec 13 23:19:40.223186 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 23:19:40.223194 kernel: Bridge firewalling registered
Dec 13 23:19:40.223204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 23:19:40.223214 systemd-journald[347]: Journal started
Dec 13 23:19:40.223232 systemd-journald[347]: Runtime Journal (/run/log/journal/a049f54414764548a79b4a2a70bc987a) is 6M, max 48.5M, 42.4M free.
Dec 13 23:19:40.221599 systemd-modules-load[348]: Inserted module 'br_netfilter'
Dec 13 23:19:40.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.227980 kernel: audit: type=1130 audit(1765667980.224:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.228001 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 23:19:40.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.231258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:19:40.236047 kernel: audit: type=1130 audit(1765667980.228:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.236071 kernel: audit: type=1130 audit(1765667980.232:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.236082 kernel: audit: type=1130 audit(1765667980.235:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.235094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 23:19:40.238836 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 23:19:40.240643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 23:19:40.242294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 23:19:40.252495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 23:19:40.260713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 23:19:40.267288 kernel: audit: type=1130 audit(1765667980.260:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.264856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 23:19:40.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.266783 systemd-tmpfiles[371]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 13 23:19:40.272241 kernel: audit: type=1130 audit(1765667980.267:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.272261 kernel: audit: type=1334 audit(1765667980.272:8): prog-id=6 op=LOAD
Dec 13 23:19:40.272000 audit: BPF prog-id=6 op=LOAD
Dec 13 23:19:40.272722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 23:19:40.275044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 23:19:40.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.280009 kernel: audit: type=1130 audit(1765667980.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.282073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 23:19:40.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.285052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 23:19:40.287794 kernel: audit: type=1130 audit(1765667980.282:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.298705 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913
Dec 13 23:19:40.319679 systemd-resolved[384]: Positive Trust Anchors:
Dec 13 23:19:40.319697 systemd-resolved[384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 23:19:40.319700 systemd-resolved[384]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 13 23:19:40.319736 systemd-resolved[384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 23:19:40.345299 systemd-resolved[384]: Defaulting to hostname 'linux'.
Dec 13 23:19:40.346288 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 23:19:40.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.348090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 23:19:40.369984 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 23:19:40.377982 kernel: iscsi: registered transport (tcp)
Dec 13 23:19:40.391997 kernel: iscsi: registered transport (qla4xxx)
Dec 13 23:19:40.392034 kernel: QLogic iSCSI HBA Driver
Dec 13 23:19:40.411215 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 23:19:40.433113 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 23:19:40.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.434420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 23:19:40.479053 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 23:19:40.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.480790 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 23:19:40.482263 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 23:19:40.511727 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 23:19:40.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.512000 audit: BPF prog-id=7 op=LOAD
Dec 13 23:19:40.512000 audit: BPF prog-id=8 op=LOAD
Dec 13 23:19:40.513946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 23:19:40.539128 systemd-udevd[628]: Using default interface naming scheme 'v257'.
Dec 13 23:19:40.546779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 23:19:40.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.550716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 23:19:40.569152 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation
Dec 13 23:19:40.577623 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 23:19:40.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.580000 audit: BPF prog-id=9 op=LOAD
Dec 13 23:19:40.581730 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 23:19:40.595093 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 23:19:40.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.596929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 23:19:40.627209 systemd-networkd[754]: lo: Link UP
Dec 13 23:19:40.627222 systemd-networkd[754]: lo: Gained carrier
Dec 13 23:19:40.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.627642 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 23:19:40.628858 systemd[1]: Reached target network.target - Network.
Dec 13 23:19:40.645996 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 23:19:40.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.649350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 23:19:40.693029 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 23:19:40.704662 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 23:19:40.712758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 23:19:40.719043 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 23:19:40.720807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 23:19:40.735475 disk-uuid[800]: Primary Header is updated.
Dec 13 23:19:40.735475 disk-uuid[800]: Secondary Entries is updated.
Dec 13 23:19:40.735475 disk-uuid[800]: Secondary Header is updated.
Dec 13 23:19:40.741855 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:19:40.741869 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 23:19:40.743228 systemd-networkd[754]: eth0: Link UP
Dec 13 23:19:40.743435 systemd-networkd[754]: eth0: Gained carrier
Dec 13 23:19:40.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.743446 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:19:40.745645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 23:19:40.745748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:19:40.750687 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:19:40.754040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:19:40.759005 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 23:19:40.793283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:19:40.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.799440 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 23:19:40.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:40.800830 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 23:19:40.802034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 23:19:40.803777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 23:19:40.806325 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 23:19:40.840889 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 23:19:40.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.766819 disk-uuid[803]: Warning: The kernel is still using the old partition table.
Dec 13 23:19:41.766819 disk-uuid[803]: The new table will be used at the next reboot or after you
Dec 13 23:19:41.766819 disk-uuid[803]: run partprobe(8) or kpartx(8)
Dec 13 23:19:41.766819 disk-uuid[803]: The operation has completed successfully.
Dec 13 23:19:41.775006 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 23:19:41.775876 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 23:19:41.777809 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 23:19:41.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.815122 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832)
Dec 13 23:19:41.815156 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:19:41.815167 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:19:41.818420 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:19:41.818464 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:19:41.823994 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:19:41.824078 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 23:19:41.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.826122 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 23:19:41.917782 ignition[851]: Ignition 2.24.0
Dec 13 23:19:41.917797 ignition[851]: Stage: fetch-offline
Dec 13 23:19:41.917833 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:41.917844 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:41.918006 ignition[851]: parsed url from cmdline: ""
Dec 13 23:19:41.918012 ignition[851]: no config URL provided
Dec 13 23:19:41.918017 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 23:19:41.918025 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Dec 13 23:19:41.918061 ignition[851]: op(1): [started] loading QEMU firmware config module
Dec 13 23:19:41.918066 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 23:19:41.923942 ignition[851]: op(1): [finished] loading QEMU firmware config module
Dec 13 23:19:41.945557 ignition[851]: parsing config with SHA512: c7612be77a171973fd5ce60d7286b69c46b7c922ceec0b4c0e7bbe011ef70ae5e76b286733b426e3859171a1bc1b15ae4120182dc5645b3c54deb1a6c044d6de
Dec 13 23:19:41.950348 unknown[851]: fetched base config from "system"
Dec 13 23:19:41.950360 unknown[851]: fetched user config from "qemu"
Dec 13 23:19:41.950724 ignition[851]: fetch-offline: fetch-offline passed
Dec 13 23:19:41.950776 ignition[851]: Ignition finished successfully
Dec 13 23:19:41.952423 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 23:19:41.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.954119 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 23:19:41.954854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 23:19:41.977870 ignition[862]: Ignition 2.24.0
Dec 13 23:19:41.977888 ignition[862]: Stage: kargs
Dec 13 23:19:41.978040 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:41.978048 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:41.978770 ignition[862]: kargs: kargs passed
Dec 13 23:19:41.980663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 23:19:41.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:41.978809 ignition[862]: Ignition finished successfully
Dec 13 23:19:41.982866 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 23:19:42.003661 ignition[869]: Ignition 2.24.0
Dec 13 23:19:42.003676 ignition[869]: Stage: disks
Dec 13 23:19:42.003820 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:42.003828 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:42.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:42.006591 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 23:19:42.004686 ignition[869]: disks: disks passed
Dec 13 23:19:42.007752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 23:19:42.004733 ignition[869]: Ignition finished successfully
Dec 13 23:19:42.009408 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 23:19:42.010942 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 23:19:42.012283 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 23:19:42.013747 systemd[1]: Reached target basic.target - Basic System.
Dec 13 23:19:42.015812 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 23:19:42.044123 systemd-fsck[878]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 13 23:19:42.049039 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 23:19:42.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:42.051722 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 23:19:42.112996 kernel: EXT4-fs (vda9): mounted filesystem b02592d5-55bb-4524-99a1-b54eb9e1980a r/w with ordered data mode. Quota mode: none.
Dec 13 23:19:42.113070 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 23:19:42.114154 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 23:19:42.116316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 23:19:42.117754 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 23:19:42.118654 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 23:19:42.118687 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 23:19:42.118711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 23:19:42.133248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 23:19:42.135484 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 23:19:42.140026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Dec 13 23:19:42.140062 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:19:42.140078 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:19:42.142390 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:19:42.142421 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:19:42.143282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 23:19:42.241124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 23:19:42.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:42.243195 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 23:19:42.244662 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 23:19:42.259630 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 23:19:42.261078 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:19:42.270166 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 23:19:42.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:42.278787 ignition[984]: INFO : Ignition 2.24.0
Dec 13 23:19:42.278787 ignition[984]: INFO : Stage: mount
Dec 13 23:19:42.280140 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:42.280140 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:42.280140 ignition[984]: INFO : mount: mount passed
Dec 13 23:19:42.280140 ignition[984]: INFO : Ignition finished successfully
Dec 13 23:19:42.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:42.282409 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 23:19:42.284839 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 23:19:42.315457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 23:19:42.323977 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995)
Dec 13 23:19:42.325981 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d
Dec 13 23:19:42.326015 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 23:19:42.328987 kernel: BTRFS info (device vda6): turning on async discard
Dec 13 23:19:42.329023 kernel: BTRFS info (device vda6): enabling free space tree
Dec 13 23:19:42.329907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 23:19:42.359303 ignition[1012]: INFO : Ignition 2.24.0
Dec 13 23:19:42.359303 ignition[1012]: INFO : Stage: files
Dec 13 23:19:42.360633 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:42.360633 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:42.360633 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 23:19:42.363848 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 23:19:42.363848 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 23:19:42.366692 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 23:19:42.366692 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 23:19:42.366692 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 23:19:42.365079 systemd-networkd[754]: eth0: Gained IPv6LL
Dec 13 23:19:42.370851 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 13 23:19:42.370851 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 13 23:19:42.365702 unknown[1012]: wrote ssh authorized keys file for user: core
Dec 13 23:19:42.415086 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 23:19:42.549756 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 13 23:19:42.549756 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 23:19:42.553299 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 13 23:19:42.568377 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 13 23:19:42.568377 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 13 23:19:42.568377 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Dec 13 23:19:42.947641 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 23:19:43.216692 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 13 23:19:43.216692 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 23:19:43.220074 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 23:19:43.234959 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 23:19:43.238147 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 23:19:43.239356 ignition[1012]: INFO : files: files passed
Dec 13 23:19:43.239356 ignition[1012]: INFO : Ignition finished successfully
Dec 13 23:19:43.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.240063 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 23:19:43.243169 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 23:19:43.245497 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 23:19:43.258994 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 23:19:43.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.259103 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 23:19:43.263402 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 23:19:43.266315 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:19:43.266315 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:19:43.269226 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 23:19:43.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.268611 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 23:19:43.270242 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 23:19:43.272613 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 23:19:43.317318 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 23:19:43.317456 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 23:19:43.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.319365 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 23:19:43.320767 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 23:19:43.322458 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 23:19:43.323332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 23:19:43.338342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 23:19:43.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.340653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 23:19:43.358692 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 23:19:43.358879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 23:19:43.360891 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 23:19:43.362823 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 23:19:43.364476 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 23:19:43.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.364593 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 23:19:43.366804 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 23:19:43.368712 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 23:19:43.370177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 23:19:43.371731 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 23:19:43.373517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 23:19:43.375235 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 23:19:43.377003 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 23:19:43.378767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 23:19:43.380551 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 23:19:43.382313 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 23:19:43.383923 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 23:19:43.385390 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 23:19:43.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.385520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 23:19:43.387747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 23:19:43.389732 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 23:19:43.391504 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 23:19:43.391627 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 23:19:43.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.393434 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 23:19:43.393561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 23:19:43.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.395904 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 23:19:43.396024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 23:19:43.397785 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 23:19:43.399046 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 23:19:43.400067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 23:19:43.401635 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 23:19:43.402864 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 23:19:43.404327 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 23:19:43.404422 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 23:19:43.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.406113 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 23:19:43.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.406190 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 23:19:43.407541 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 13 23:19:43.407610 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 23:19:43.408948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 23:19:43.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.409076 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 23:19:43.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.410511 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 23:19:43.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.410614 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 23:19:43.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.412800 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 23:19:43.414264 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 23:19:43.414387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 23:19:43.416776 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 23:19:43.417549 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 23:19:43.417665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 23:19:43.419565 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 23:19:43.419673 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 23:19:43.421305 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 23:19:43.421407 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 23:19:43.426619 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 23:19:43.435145 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 23:19:43.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.444396 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 23:19:43.446359 ignition[1071]: INFO : Ignition 2.24.0
Dec 13 23:19:43.446359 ignition[1071]: INFO : Stage: umount
Dec 13 23:19:43.447840 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 23:19:43.447840 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 23:19:43.447840 ignition[1071]: INFO : umount: umount passed
Dec 13 23:19:43.447840 ignition[1071]: INFO : Ignition finished successfully
Dec 13 23:19:43.448786 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 23:19:43.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.450008 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 23:19:43.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.451749 systemd[1]: Stopped target network.target - Network.
Dec 13 23:19:43.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.452925 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 23:19:43.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.452996 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 23:19:43.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.455264 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 23:19:43.455315 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 23:19:43.456773 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 23:19:43.456818 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 23:19:43.458301 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 23:19:43.458344 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 23:19:43.460043 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 23:19:43.461748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 23:19:43.468152 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 23:19:43.470007 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 23:19:43.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.476000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 23:19:43.477508 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 23:19:43.477618 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 23:19:43.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.480000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 23:19:43.481257 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 23:19:43.482056 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 23:19:43.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.484034 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 13 23:19:43.485784 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 23:19:43.485843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 23:19:43.487516 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 23:19:43.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.487568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 23:19:43.489874 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 23:19:43.491402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 23:19:43.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.491466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 23:19:43.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.493082 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 23:19:43.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.493126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 23:19:43.494588 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 23:19:43.494630 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 23:19:43.496364 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 23:19:43.514194 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 23:19:43.520282 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 23:19:43.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.521646 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 23:19:43.521680 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 23:19:43.523106 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 23:19:43.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.523133 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 23:19:43.524624 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 23:19:43.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.524668 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 23:19:43.526931 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 23:19:43.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.526993 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 23:19:43.529361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 23:19:43.529411 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 23:19:43.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.532407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 23:19:43.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.533318 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 13 23:19:43.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.533377 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 23:19:43.534985 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 23:19:43.535030 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 23:19:43.536904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 23:19:43.536946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:19:43.539117 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 23:19:43.557129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 23:19:43.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.562277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 23:19:43.562373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 23:19:43.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:43.564219 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 23:19:43.566399 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 23:19:43.597669 systemd[1]: Switching root.
Dec 13 23:19:43.631868 systemd-journald[347]: Journal stopped
Dec 13 23:19:44.395657 systemd-journald[347]: Received SIGTERM from PID 1 (systemd).
Dec 13 23:19:44.395709 kernel: kauditd_printk_skb: 69 callbacks suppressed
Dec 13 23:19:44.395727 kernel: audit: type=1335 audit(1765667983.649:80): pid=347 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1
Dec 13 23:19:44.395740 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 23:19:44.395753 kernel: SELinux: policy capability open_perms=1
Dec 13 23:19:44.395763 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 23:19:44.395773 kernel: SELinux: policy capability always_check_network=0
Dec 13 23:19:44.395782 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 23:19:44.395792 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 23:19:44.395805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 23:19:44.395820 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 23:19:44.395831 kernel: SELinux: policy capability userspace_initial_context=0
Dec 13 23:19:44.395841 kernel: audit: type=1403 audit(1765667983.828:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 23:19:44.395851 systemd[1]: Successfully loaded SELinux policy in 61.925ms.
Dec 13 23:19:44.395867 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.424ms.
Dec 13 23:19:44.395879 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 23:19:44.395891 systemd[1]: Detected virtualization kvm.
Dec 13 23:19:44.395901 systemd[1]: Detected architecture arm64.
Dec 13 23:19:44.395913 systemd[1]: Detected first boot.
Dec 13 23:19:44.395924 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 23:19:44.395934 kernel: audit: type=1334 audit(1765667983.883:82): prog-id=10 op=LOAD
Dec 13 23:19:44.395945 kernel: audit: type=1334 audit(1765667983.883:83): prog-id=10 op=UNLOAD
Dec 13 23:19:44.395966 kernel: audit: type=1334 audit(1765667983.884:84): prog-id=11 op=LOAD
Dec 13 23:19:44.395977 kernel: audit: type=1334 audit(1765667983.884:85): prog-id=11 op=UNLOAD
Dec 13 23:19:44.395993 zram_generator::config[1118]: No configuration found.
Dec 13 23:19:44.396006 kernel: NET: Registered PF_VSOCK protocol family
Dec 13 23:19:44.396031 systemd[1]: Populated /etc with preset unit settings.
Dec 13 23:19:44.396048 kernel: audit: type=1334 audit(1765667984.206:86): prog-id=12 op=LOAD
Dec 13 23:19:44.396059 kernel: audit: type=1334 audit(1765667984.206:87): prog-id=3 op=UNLOAD
Dec 13 23:19:44.396069 kernel: audit: type=1334 audit(1765667984.206:88): prog-id=13 op=LOAD
Dec 13 23:19:44.396078 kernel: audit: type=1334 audit(1765667984.206:89): prog-id=14 op=LOAD
Dec 13 23:19:44.396089 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 23:19:44.396102 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 23:19:44.396113 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 23:19:44.396125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 23:19:44.396136 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 23:19:44.396146 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 23:19:44.396157 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 23:19:44.396169 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 23:19:44.396180 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 23:19:44.396190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 23:19:44.396201 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 23:19:44.396212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 23:19:44.396223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 23:19:44.396235 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 23:19:44.396246 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 23:19:44.396257 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 23:19:44.396268 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 23:19:44.396278 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 23:19:44.396300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 23:19:44.396311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 23:19:44.396323 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 23:19:44.396334 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 23:19:44.396345 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 23:19:44.396357 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 23:19:44.396369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 23:19:44.396379 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 23:19:44.396390 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 13 23:19:44.396402 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 23:19:44.396413 systemd[1]: Reached target swap.target - Swaps.
Dec 13 23:19:44.396424 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 23:19:44.396435 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 23:19:44.396446 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 13 23:19:44.396457 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 23:19:44.396474 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 13 23:19:44.396487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 23:19:44.396498 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 13 23:19:44.396509 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 13 23:19:44.396520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 23:19:44.396530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 23:19:44.396541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 23:19:44.396552 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 23:19:44.396565 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 23:19:44.396576 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 23:19:44.396587 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 23:19:44.396598 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 23:19:44.396608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 23:19:44.396619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 23:19:44.396630 systemd[1]: Reached target machines.target - Containers.
Dec 13 23:19:44.396642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 23:19:44.396653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 23:19:44.396663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 23:19:44.396674 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 23:19:44.396685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 23:19:44.396695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 23:19:44.396706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 23:19:44.396718 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 23:19:44.396729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 23:19:44.396740 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 23:19:44.396752 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 23:19:44.396762 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 23:19:44.396773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 23:19:44.396784 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 23:19:44.396796 kernel: fuse: init (API version 7.41)
Dec 13 23:19:44.396807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 13 23:19:44.396819 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 23:19:44.396831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 23:19:44.396842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 23:19:44.396852 kernel: ACPI: bus type drm_connector registered
Dec 13 23:19:44.396866 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 23:19:44.396877 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 13 23:19:44.396888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 23:19:44.396898 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 23:19:44.396910 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 23:19:44.396921 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 23:19:44.396951 systemd-journald[1190]: Collecting audit messages is enabled.
Dec 13 23:19:44.397032 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 23:19:44.397044 systemd-journald[1190]: Journal started
Dec 13 23:19:44.397068 systemd-journald[1190]: Runtime Journal (/run/log/journal/a049f54414764548a79b4a2a70bc987a) is 6M, max 48.5M, 42.4M free.
Dec 13 23:19:44.274000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 23:19:44.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.358000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 23:19:44.358000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 23:19:44.359000 audit: BPF prog-id=15 op=LOAD
Dec 13 23:19:44.359000 audit: BPF prog-id=16 op=LOAD
Dec 13 23:19:44.359000 audit: BPF prog-id=17 op=LOAD
Dec 13 23:19:44.393000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 23:19:44.393000 audit[1190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffd6d87cd0 a2=4000 a3=0 items=0 ppid=1 pid=1190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 23:19:44.393000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 23:19:44.188637 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 23:19:44.207806 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 23:19:44.208919 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 23:19:44.399364 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 23:19:44.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.400283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 23:19:44.401331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 23:19:44.403993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 23:19:44.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.405212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 23:19:44.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.406484 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 23:19:44.406645 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 23:19:44.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.407882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 23:19:44.408171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 23:19:44.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.409325 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 23:19:44.409498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 23:19:44.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.410627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 23:19:44.410772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 23:19:44.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.412224 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 23:19:44.412372 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 23:19:44.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.413519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 23:19:44.413666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 23:19:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.414874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 23:19:44.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.416518 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 23:19:44.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.418425 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 23:19:44.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.419844 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 13 23:19:44.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.433262 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 23:19:44.434589 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 13 23:19:44.436599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 23:19:44.438453 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 23:19:44.439437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 23:19:44.439479 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 23:19:44.441308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 13 23:19:44.442901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 23:19:44.443042 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 13 23:19:44.447763 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 23:19:44.449657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 23:19:44.450670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 23:19:44.451501 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 23:19:44.452592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 23:19:44.453418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 23:19:44.457097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 23:19:44.460365 systemd-journald[1190]: Time spent on flushing to /var/log/journal/a049f54414764548a79b4a2a70bc987a is 17.425ms for 1001 entries.
Dec 13 23:19:44.460365 systemd-journald[1190]: System Journal (/var/log/journal/a049f54414764548a79b4a2a70bc987a) is 8M, max 163.5M, 155.5M free.
Dec 13 23:19:44.483593 systemd-journald[1190]: Received client request to flush runtime journal.
Dec 13 23:19:44.483648 kernel: loop1: detected capacity change from 0 to 161080
Dec 13 23:19:44.483663 kernel: loop1: p1 p2 p3
Dec 13 23:19:44.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.460133 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 23:19:44.464023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 23:19:44.465950 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 23:19:44.467273 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 23:19:44.468624 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 23:19:44.474866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 23:19:44.480209 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 13 23:19:44.483075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 23:19:44.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.495053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 23:19:44.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.499053 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39.
Dec 13 23:19:44.498878 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 23:19:44.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.502000 audit: BPF prog-id=18 op=LOAD
Dec 13 23:19:44.502000 audit: BPF prog-id=19 op=LOAD
Dec 13 23:19:44.502000 audit: BPF prog-id=20 op=LOAD
Dec 13 23:19:44.503862 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 13 23:19:44.504000 audit: BPF prog-id=21 op=LOAD
Dec 13 23:19:44.509114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 23:19:44.511361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 23:19:44.512000 audit: BPF prog-id=22 op=LOAD
Dec 13 23:19:44.512000 audit: BPF prog-id=23 op=LOAD
Dec 13 23:19:44.512000 audit: BPF prog-id=24 op=LOAD
Dec 13 23:19:44.514855 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 13 23:19:44.515000 audit: BPF prog-id=25 op=LOAD
Dec 13 23:19:44.515000 audit: BPF prog-id=26 op=LOAD
Dec 13 23:19:44.515000 audit: BPF prog-id=27 op=LOAD
Dec 13 23:19:44.518976 kernel: loop2: detected capacity change from 0 to 353272
Dec 13 23:19:44.519981 kernel: loop2: p1 p2 p3
Dec 13 23:19:44.521154 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 23:19:44.523166 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 13 23:19:44.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.531983 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39.
Dec 13 23:19:44.541846 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 13 23:19:44.542140 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Dec 13 23:19:44.548051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 23:19:44.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.552969 kernel: loop3: detected capacity change from 0 to 207008
Dec 13 23:19:44.566295 systemd-nsresourced[1254]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 13 23:19:44.567225 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 13 23:19:44.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.568998 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 23:19:44.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.577099 kernel: loop4: detected capacity change from 0 to 161080
Dec 13 23:19:44.578984 kernel: loop4: p1 p2 p3
Dec 13 23:19:44.592990 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:19:44.593101 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Dec 13 23:19:44.593124 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Dec 13 23:19:44.594617 kernel: device-mapper: ioctl: error adding target to table
Dec 13 23:19:44.594548 (sd-merge)[1272]: device-mapper: reload ioctl on cf827620bc7ad537f83bb2a823378974b3cc077c207d7b04c642a58e7bc0ec99-verity (253:1) failed: Invalid argument
Dec 13 23:19:44.601988 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:19:44.617311 systemd-oomd[1251]: No swap; memory pressure usage will be degraded
Dec 13 23:19:44.617786 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 13 23:19:44.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.626548 systemd-resolved[1252]: Positive Trust Anchors:
Dec 13 23:19:44.626568 systemd-resolved[1252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 23:19:44.626572 systemd-resolved[1252]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 13 23:19:44.626604 systemd-resolved[1252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 23:19:44.630745 systemd-resolved[1252]: Defaulting to hostname 'linux'.
Dec 13 23:19:44.632252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 23:19:44.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.633378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 23:19:44.857077 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 23:19:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.857000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 23:19:44.857000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 23:19:44.857000 audit: BPF prog-id=28 op=LOAD
Dec 13 23:19:44.857000 audit: BPF prog-id=29 op=LOAD
Dec 13 23:19:44.859586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 23:19:44.895657 systemd-udevd[1280]: Using default interface naming scheme 'v257'.
Dec 13 23:19:44.910674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 23:19:44.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.912000 audit: BPF prog-id=30 op=LOAD
Dec 13 23:19:44.914110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 23:19:44.987402 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 23:19:44.990527 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 23:19:44.992971 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 23:19:44.994843 systemd-networkd[1290]: lo: Link UP
Dec 13 23:19:44.994851 systemd-networkd[1290]: lo: Gained carrier
Dec 13 23:19:44.995716 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 23:19:44.996121 systemd-networkd[1290]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:19:44.996131 systemd-networkd[1290]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 23:19:44.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:44.996796 systemd[1]: Reached target network.target - Network.
Dec 13 23:19:44.996905 systemd-networkd[1290]: eth0: Link UP
Dec 13 23:19:44.997121 systemd-networkd[1290]: eth0: Gained carrier
Dec 13 23:19:44.997135 systemd-networkd[1290]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 13 23:19:45.003386 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 13 23:19:45.007230 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 23:19:45.009014 systemd-networkd[1290]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 23:19:45.019177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 23:19:45.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.029515 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 13 23:19:45.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.087118 kernel: erofs: (device dm-1): mounted with root inode @ nid 39.
Dec 13 23:19:45.088970 kernel: loop5: detected capacity change from 0 to 353272
Dec 13 23:19:45.090975 kernel: loop5: p1 p2 p3
Dec 13 23:19:45.090305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 23:19:45.097668 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:19:45.097707 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Dec 13 23:19:45.097734 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Dec 13 23:19:45.098649 kernel: device-mapper: ioctl: error adding target to table
Dec 13 23:19:45.099351 (sd-merge)[1272]: device-mapper: reload ioctl on b35b2492fcca387995ac7cc700425775891a7db9ed46359c680e82ec44f4021d-verity (253:2) failed: Invalid argument
Dec 13 23:19:45.105990 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 23:19:45.126981 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Dec 13 23:19:45.127131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 23:19:45.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.128969 kernel: loop6: detected capacity change from 0 to 207008
Dec 13 23:19:45.133187 (sd-merge)[1272]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Dec 13 23:19:45.135776 (sd-merge)[1272]: Merged extensions into '/usr'.
Dec 13 23:19:45.138703 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 23:19:45.138719 systemd[1]: Reloading...
Dec 13 23:19:45.184036 zram_generator::config[1374]: No configuration found.
Dec 13 23:19:45.351660 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 23:19:45.352098 systemd[1]: Reloading finished in 213 ms.
Dec 13 23:19:45.370677 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 23:19:45.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.386078 systemd[1]: Starting ensure-sysext.service...
Dec 13 23:19:45.387669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 23:19:45.388000 audit: BPF prog-id=31 op=LOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=32 op=LOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=33 op=LOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=34 op=LOAD
Dec 13 23:19:45.388000 audit: BPF prog-id=30 op=UNLOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=35 op=LOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=36 op=LOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=37 op=LOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 23:19:45.389000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=38 op=LOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=39 op=LOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=40 op=LOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 23:19:45.390000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=41 op=LOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=42 op=LOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=43 op=LOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 23:19:45.391000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=44 op=LOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=45 op=LOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=46 op=LOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 23:19:45.392000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 23:19:45.398229 systemd[1]: Reload requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)...
Dec 13 23:19:45.398243 systemd[1]: Reloading...
Dec 13 23:19:45.400641 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 13 23:19:45.400675 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 13 23:19:45.401020 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 23:19:45.401905 systemd-tmpfiles[1410]: ACLs are not supported, ignoring.
Dec 13 23:19:45.401993 systemd-tmpfiles[1410]: ACLs are not supported, ignoring.
Dec 13 23:19:45.405432 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 23:19:45.405446 systemd-tmpfiles[1410]: Skipping /boot
Dec 13 23:19:45.411487 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 23:19:45.411497 systemd-tmpfiles[1410]: Skipping /boot
Dec 13 23:19:45.451985 zram_generator::config[1444]: No configuration found.
Dec 13 23:19:45.617409 systemd[1]: Reloading finished in 218 ms.
Dec 13 23:19:45.636000 audit: BPF prog-id=47 op=LOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=48 op=LOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=45 op=UNLOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=46 op=UNLOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=49 op=LOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=31 op=UNLOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=50 op=LOAD
Dec 13 23:19:45.636000 audit: BPF prog-id=51 op=LOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=32 op=UNLOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=33 op=UNLOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=52 op=LOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=38 op=UNLOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=53 op=LOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=54 op=LOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=39 op=UNLOAD
Dec 13 23:19:45.637000 audit: BPF prog-id=40 op=UNLOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=55 op=LOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=41 op=UNLOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=56 op=LOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=57 op=LOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=42 op=UNLOAD
Dec 13 23:19:45.639000 audit: BPF prog-id=43 op=UNLOAD
Dec 13 23:19:45.643000 audit: BPF prog-id=58 op=LOAD
Dec 13 23:19:45.643000 audit: BPF prog-id=34 op=UNLOAD
Dec 13 23:19:45.643000 audit: BPF prog-id=59 op=LOAD
Dec 13 23:19:45.643000 audit: BPF prog-id=35 op=UNLOAD
Dec 13 23:19:45.644000 audit: BPF prog-id=60 op=LOAD
Dec 13 23:19:45.657000 audit: BPF prog-id=61 op=LOAD
Dec 13 23:19:45.657000 audit: BPF prog-id=36 op=UNLOAD
Dec 13 23:19:45.657000 audit: BPF prog-id=37 op=UNLOAD
Dec 13 23:19:45.657000 audit: BPF prog-id=62 op=LOAD
Dec 13 23:19:45.657000 audit: BPF prog-id=44 op=UNLOAD
Dec 13 23:19:45.660916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 23:19:45.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.673477 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 23:19:45.675885 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 23:19:45.690244 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 23:19:45.698420 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 23:19:45.700718 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 23:19:45.704734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 23:19:45.706133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 23:19:45.707000 audit[1489]: SYSTEM_BOOT pid=1489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.708737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 23:19:45.711761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 23:19:45.714149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 23:19:45.714349 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 13 23:19:45.714449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 13 23:19:45.717580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 23:19:45.717805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 23:19:45.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.720560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 23:19:45.720745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 23:19:45.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 23:19:45.728153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 23:19:45.731530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 23:19:45.736301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 23:19:45.737429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 23:19:45.737612 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 23:19:45.737703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 23:19:45.740031 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 23:19:45.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.741785 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 23:19:45.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.743887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 23:19:45.750380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 23:19:45.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:19:45.755828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 23:19:45.756019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 23:19:45.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.757698 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 23:19:45.757860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 23:19:45.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 23:19:45.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 23:19:45.762000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 23:19:45.762000 audit[1513]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd0ba99d0 a2=420 a3=0 items=0 ppid=1479 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 23:19:45.762000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 23:19:45.763861 augenrules[1513]: No rules Dec 13 23:19:45.766390 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 23:19:45.766643 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 23:19:45.769000 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 23:19:45.772674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 23:19:45.774095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 23:19:45.776167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 23:19:45.780102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 23:19:45.791648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 23:19:45.793397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 23:19:45.793533 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 13 23:19:45.793581 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 23:19:45.793741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 23:19:45.795742 systemd[1]: Finished ensure-sysext.service. Dec 13 23:19:45.797388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 23:19:45.797616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 23:19:45.799601 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 23:19:45.799806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 23:19:45.802641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 23:19:45.803710 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 23:19:45.805243 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 23:19:45.805518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 23:19:45.811825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 23:19:45.812159 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 23:19:45.814180 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 23:19:45.869082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 23:19:45.869812 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Dec 13 23:19:45.869851 systemd-timesyncd[1531]: Initial clock synchronization to Sat 2025-12-13 23:19:45.847631 UTC. Dec 13 23:19:45.871089 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 23:19:45.965786 ldconfig[1481]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 23:19:45.970504 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 23:19:45.972905 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 23:19:46.004066 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 23:19:46.005269 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 23:19:46.008256 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 23:19:46.009312 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 23:19:46.010664 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 23:19:46.011708 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 23:19:46.012821 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 23:19:46.014057 systemd-networkd[1290]: eth0: Gained IPv6LL Dec 13 23:19:46.014146 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 23:19:46.015075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 23:19:46.016102 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 23:19:46.016139 systemd[1]: Reached target paths.target - Path Units. Dec 13 23:19:46.016873 systemd[1]: Reached target timers.target - Timer Units. 
Dec 13 23:19:46.018263 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 23:19:46.020576 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 23:19:46.023338 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 23:19:46.024608 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 23:19:46.025779 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 23:19:46.028944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 23:19:46.030196 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 23:19:46.032023 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 23:19:46.033322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 23:19:46.035740 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 23:19:46.036716 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 23:19:46.037546 systemd[1]: Reached target basic.target - Basic System. Dec 13 23:19:46.038388 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 23:19:46.038430 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 23:19:46.039515 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 23:19:46.041361 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 23:19:46.043181 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 23:19:46.044789 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 23:19:46.048025 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 23:19:46.050063 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 23:19:46.050933 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 23:19:46.052282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:19:46.055042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 23:19:46.055160 jq[1545]: false Dec 13 23:19:46.058049 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 23:19:46.060034 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 23:19:46.062127 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 23:19:46.064808 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 23:19:46.068348 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 23:19:46.069280 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 23:19:46.069893 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 23:19:46.070582 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 23:19:46.072426 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 23:19:46.077230 extend-filesystems[1546]: Found /dev/vda6 Dec 13 23:19:46.079004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 23:19:46.088244 extend-filesystems[1546]: Found /dev/vda9 Dec 13 23:19:46.088244 extend-filesystems[1546]: Checking size of /dev/vda9 Dec 13 23:19:46.080427 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 13 23:19:46.089801 jq[1561]: true Dec 13 23:19:46.080708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 23:19:46.082561 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 23:19:46.082840 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 23:19:46.095044 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 23:19:46.095445 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 23:19:46.107886 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 23:19:46.115818 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 23:19:46.116165 jq[1583]: true Dec 13 23:19:46.116457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 23:19:46.121664 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 23:19:46.123443 tar[1572]: linux-arm64/LICENSE Dec 13 23:19:46.123653 tar[1572]: linux-arm64/helm Dec 13 23:19:46.124529 update_engine[1559]: I20251213 23:19:46.124307 1559 main.cc:92] Flatcar Update Engine starting Dec 13 23:19:46.131369 extend-filesystems[1546]: Resized partition /dev/vda9 Dec 13 23:19:46.136819 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 23:19:46.143774 dbus-daemon[1543]: [system] SELinux support is enabled Dec 13 23:19:46.145561 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 23:19:46.149926 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 23:19:46.149974 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 13 23:19:46.151261 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 23:19:46.151282 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 23:19:46.160983 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 23:19:46.162888 update_engine[1559]: I20251213 23:19:46.162835 1559 update_check_scheduler.cc:74] Next update check in 4m49s Dec 13 23:19:46.164563 systemd[1]: Started update-engine.service - Update Engine. Dec 13 23:19:46.170593 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 23:19:46.228165 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 23:19:46.230139 systemd-logind[1556]: New seat seat0. Dec 13 23:19:46.231973 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 23:19:46.238977 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 23:19:46.254018 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 23:19:46.254018 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 23:19:46.254018 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 23:19:46.258745 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Dec 13 23:19:46.257716 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 23:19:46.261637 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Dec 13 23:19:46.258022 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 23:19:46.262365 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 23:19:46.267059 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 13 23:19:46.281817 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 23:19:46.296115 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 23:19:46.311593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 23:19:46.316248 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 23:19:46.333376 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 23:19:46.333646 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 23:19:46.336597 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 23:19:46.355545 containerd[1598]: time="2025-12-13T23:19:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 23:19:46.356906 containerd[1598]: time="2025-12-13T23:19:46.356844139Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 23:19:46.361325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 23:19:46.364507 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 23:19:46.369346 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 23:19:46.370603 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 23:19:46.382837 containerd[1598]: time="2025-12-13T23:19:46.382782315Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.265µs" Dec 13 23:19:46.382837 containerd[1598]: time="2025-12-13T23:19:46.382828251Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 23:19:46.383203 containerd[1598]: time="2025-12-13T23:19:46.382880619Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 23:19:46.383203 containerd[1598]: time="2025-12-13T23:19:46.382895358Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 23:19:46.383203 containerd[1598]: time="2025-12-13T23:19:46.383124481Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 23:19:46.383203 containerd[1598]: time="2025-12-13T23:19:46.383146690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383293 containerd[1598]: time="2025-12-13T23:19:46.383201854Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383293 containerd[1598]: time="2025-12-13T23:19:46.383214037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383502 containerd[1598]: time="2025-12-13T23:19:46.383478631Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383535 containerd[1598]: time="2025-12-13T23:19:46.383503277Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383535 containerd[1598]: time="2025-12-13T23:19:46.383515739Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383535 containerd[1598]: time="2025-12-13T23:19:46.383523609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383744 containerd[1598]: time="2025-12-13T23:19:46.383694452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 23:19:46.383877 containerd[1598]: time="2025-12-13T23:19:46.383830304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.384155 containerd[1598]: time="2025-12-13T23:19:46.384129370Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.384190 containerd[1598]: time="2025-12-13T23:19:46.384170593Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 23:19:46.384190 containerd[1598]: time="2025-12-13T23:19:46.384182536Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 23:19:46.384423 containerd[1598]: time="2025-12-13T23:19:46.384402352Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 23:19:46.385051 containerd[1598]: time="2025-12-13T23:19:46.385024930Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 23:19:46.385167 containerd[1598]: time="2025-12-13T23:19:46.385122515Z" level=info msg="metadata content store 
policy set" policy=shared Dec 13 23:19:46.389830 containerd[1598]: time="2025-12-13T23:19:46.389791213Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 23:19:46.389908 containerd[1598]: time="2025-12-13T23:19:46.389847535Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 23:19:46.390019 containerd[1598]: time="2025-12-13T23:19:46.389989738Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 23:19:46.390019 containerd[1598]: time="2025-12-13T23:19:46.390011228Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390027805Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390040468Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390053050Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390062477Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390076458Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 23:19:46.390091 containerd[1598]: time="2025-12-13T23:19:46.390089240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 23:19:46.390191 containerd[1598]: 
time="2025-12-13T23:19:46.390100984Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 23:19:46.390191 containerd[1598]: time="2025-12-13T23:19:46.390111929Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 23:19:46.390191 containerd[1598]: time="2025-12-13T23:19:46.390121675Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 23:19:46.390191 containerd[1598]: time="2025-12-13T23:19:46.390133539Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390254332Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390282812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390298071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390308976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390325953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390336019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 23:19:46.390362 containerd[1598]: time="2025-12-13T23:19:46.390347363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 23:19:46.390527 containerd[1598]: time="2025-12-13T23:19:46.390368334Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 23:19:46.390527 containerd[1598]: time="2025-12-13T23:19:46.390385630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 23:19:46.390527 containerd[1598]: time="2025-12-13T23:19:46.390395736Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 23:19:46.390527 containerd[1598]: time="2025-12-13T23:19:46.390405403Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 23:19:46.390527 containerd[1598]: time="2025-12-13T23:19:46.390430967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 23:19:46.390682 containerd[1598]: time="2025-12-13T23:19:46.390622981Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 23:19:46.390682 containerd[1598]: time="2025-12-13T23:19:46.390650743Z" level=info msg="Start snapshots syncer" Dec 13 23:19:46.390840 containerd[1598]: time="2025-12-13T23:19:46.390679224Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 23:19:46.391340 containerd[1598]: time="2025-12-13T23:19:46.391183246Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 23:19:46.391340 containerd[1598]: time="2025-12-13T23:19:46.391252150Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 23:19:46.391662 containerd[1598]: 
time="2025-12-13T23:19:46.391638096Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 23:19:46.391781 containerd[1598]: time="2025-12-13T23:19:46.391762444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 23:19:46.391812 containerd[1598]: time="2025-12-13T23:19:46.391792043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 23:19:46.391812 containerd[1598]: time="2025-12-13T23:19:46.391803427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 23:19:46.391856 containerd[1598]: time="2025-12-13T23:19:46.391812655Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 23:19:46.391856 containerd[1598]: time="2025-12-13T23:19:46.391825077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 23:19:46.391856 containerd[1598]: time="2025-12-13T23:19:46.391835862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 23:19:46.391856 containerd[1598]: time="2025-12-13T23:19:46.391849084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 23:19:46.391936 containerd[1598]: time="2025-12-13T23:19:46.391870574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 23:19:46.391936 containerd[1598]: time="2025-12-13T23:19:46.391881759Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 23:19:46.392214 containerd[1598]: time="2025-12-13T23:19:46.392164767Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 23:19:46.392214 containerd[1598]: 
time="2025-12-13T23:19:46.392196044Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 23:19:46.392350 containerd[1598]: time="2025-12-13T23:19:46.392206909Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 23:19:46.392379 containerd[1598]: time="2025-12-13T23:19:46.392352827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 23:19:46.392379 containerd[1598]: time="2025-12-13T23:19:46.392363812Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 13 23:19:46.392379 containerd[1598]: time="2025-12-13T23:19:46.392375475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 13 23:19:46.392474 containerd[1598]: time="2025-12-13T23:19:46.392405434Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 13 23:19:46.392496 containerd[1598]: time="2025-12-13T23:19:46.392481369Z" level=info msg="runtime interface created"
Dec 13 23:19:46.392496 containerd[1598]: time="2025-12-13T23:19:46.392487361Z" level=info msg="created NRI interface"
Dec 13 23:19:46.392530 containerd[1598]: time="2025-12-13T23:19:46.392497307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 13 23:19:46.392530 containerd[1598]: time="2025-12-13T23:19:46.392509849Z" level=info msg="Connect containerd service"
Dec 13 23:19:46.392569 containerd[1598]: time="2025-12-13T23:19:46.392542764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 23:19:46.394403 containerd[1598]: time="2025-12-13T23:19:46.394374029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 23:19:46.514489 containerd[1598]: time="2025-12-13T23:19:46.514350593Z" level=info msg="Start subscribing containerd event"
Dec 13 23:19:46.514489 containerd[1598]: time="2025-12-13T23:19:46.514436754Z" level=info msg="Start recovering state"
Dec 13 23:19:46.515447 containerd[1598]: time="2025-12-13T23:19:46.515422070Z" level=info msg="Start event monitor"
Dec 13 23:19:46.515504 containerd[1598]: time="2025-12-13T23:19:46.515492293Z" level=info msg="Start cni network conf syncer for default"
Dec 13 23:19:46.515586 containerd[1598]: time="2025-12-13T23:19:46.515507752Z" level=info msg="Start streaming server"
Dec 13 23:19:46.516276 containerd[1598]: time="2025-12-13T23:19:46.516247967Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 13 23:19:46.516276 containerd[1598]: time="2025-12-13T23:19:46.516276008Z" level=info msg="runtime interface starting up..."
Dec 13 23:19:46.516345 containerd[1598]: time="2025-12-13T23:19:46.516286114Z" level=info msg="starting plugins..."
Dec 13 23:19:46.516345 containerd[1598]: time="2025-12-13T23:19:46.516309522Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 13 23:19:46.516592 containerd[1598]: time="2025-12-13T23:19:46.516570720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 23:19:46.516638 containerd[1598]: time="2025-12-13T23:19:46.516623447Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 23:19:46.518539 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 23:19:46.521553 containerd[1598]: time="2025-12-13T23:19:46.521462109Z" level=info msg="containerd successfully booted in 0.166612s"
Dec 13 23:19:46.531456 tar[1572]: linux-arm64/README.md
Dec 13 23:19:46.555018 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 23:19:46.847235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:19:46.848790 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 23:19:46.851399 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 23:19:46.851431 systemd[1]: Startup finished in 1.394s (kernel) + 3.814s (initrd) + 3.084s (userspace) = 8.293s.
Dec 13 23:19:47.207438 kubelet[1682]: E1213 23:19:47.207402 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 23:19:47.209536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 23:19:47.209660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 23:19:47.210033 systemd[1]: kubelet.service: Consumed 763ms CPU time, 257.6M memory peak.
Dec 13 23:19:51.266786 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 23:19:51.267999 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:33882.service - OpenSSH per-connection server daemon (10.0.0.1:33882).
Dec 13 23:19:51.348288 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 33882 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:19:51.350594 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:51.357203 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 23:19:51.358721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 23:19:51.366137 systemd-logind[1556]: New session 1 of user core.
Dec 13 23:19:51.386920 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 23:19:51.391115 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 23:19:51.409858 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:51.412475 systemd-logind[1556]: New session 2 of user core.
Dec 13 23:19:51.635477 systemd[1701]: Queued start job for default target default.target.
Dec 13 23:19:51.643990 systemd[1701]: Created slice app.slice - User Application Slice.
Dec 13 23:19:51.644019 systemd[1701]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 13 23:19:51.644031 systemd[1701]: Reached target paths.target - Paths.
Dec 13 23:19:51.644087 systemd[1701]: Reached target timers.target - Timers.
Dec 13 23:19:51.645281 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 23:19:51.646068 systemd[1701]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 13 23:19:51.655883 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 23:19:51.656736 systemd[1701]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 13 23:19:51.656912 systemd[1701]: Reached target sockets.target - Sockets.
Dec 13 23:19:51.656982 systemd[1701]: Reached target basic.target - Basic System.
Dec 13 23:19:51.657011 systemd[1701]: Reached target default.target - Main User Target.
Dec 13 23:19:51.657037 systemd[1701]: Startup finished in 239ms.
Dec 13 23:19:51.657177 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 23:19:51.658776 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 23:19:51.668994 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886).
Dec 13 23:19:51.722823 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:19:51.724369 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:51.728779 systemd-logind[1556]: New session 3 of user core.
Dec 13 23:19:51.738212 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 23:19:51.750032 sshd[1720]: Connection closed by 10.0.0.1 port 33886
Dec 13 23:19:51.750520 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Dec 13 23:19:51.760311 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:33886.service: Deactivated successfully.
Dec 13 23:19:51.762073 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 23:19:51.764565 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit.
Dec 13 23:19:51.768330 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902).
Dec 13 23:19:51.769035 systemd-logind[1556]: Removed session 3.
Dec 13 23:19:51.836444 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:19:51.837852 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:51.842897 systemd-logind[1556]: New session 4 of user core.
Dec 13 23:19:51.853140 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 23:19:51.860674 sshd[1730]: Connection closed by 10.0.0.1 port 33902
Dec 13 23:19:51.861129 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Dec 13 23:19:51.873588 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:33902.service: Deactivated successfully.
Dec 13 23:19:51.876442 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 23:19:51.877148 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit.
Dec 13 23:19:51.879673 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914).
Dec 13 23:19:51.880255 systemd-logind[1556]: Removed session 4.
Dec 13 23:19:51.937571 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:19:51.938993 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:51.943633 systemd-logind[1556]: New session 5 of user core.
Dec 13 23:19:51.961192 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 23:19:51.972319 sshd[1740]: Connection closed by 10.0.0.1 port 33914
Dec 13 23:19:51.972745 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Dec 13 23:19:51.983259 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:33914.service: Deactivated successfully.
Dec 13 23:19:51.985043 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 23:19:51.986827 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit.
Dec 13 23:19:51.989033 systemd-logind[1556]: Removed session 5.
Dec 13 23:19:51.991055 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:33922.service - OpenSSH per-connection server daemon (10.0.0.1:33922).
Dec 13 23:19:52.053605 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 33922 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:19:52.054943 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:19:52.059940 systemd-logind[1556]: New session 6 of user core.
Dec 13 23:19:52.078209 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 23:19:52.096518 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 23:19:52.096783 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 23:19:52.383250 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 23:19:52.398455 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 23:19:52.645998 dockerd[1773]: time="2025-12-13T23:19:52.645704691Z" level=info msg="Starting up"
Dec 13 23:19:52.648218 dockerd[1773]: time="2025-12-13T23:19:52.648176808Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 13 23:19:52.659281 dockerd[1773]: time="2025-12-13T23:19:52.659234980Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 13 23:19:52.697196 dockerd[1773]: time="2025-12-13T23:19:52.697149524Z" level=info msg="Loading containers: start."
Dec 13 23:19:52.706999 kernel: Initializing XFRM netlink socket
Dec 13 23:19:52.907481 systemd-networkd[1290]: docker0: Link UP
Dec 13 23:19:52.911331 dockerd[1773]: time="2025-12-13T23:19:52.911299395Z" level=info msg="Loading containers: done."
Dec 13 23:19:52.923920 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2421115899-merged.mount: Deactivated successfully.
Dec 13 23:19:52.925529 dockerd[1773]: time="2025-12-13T23:19:52.925482351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 23:19:52.925867 dockerd[1773]: time="2025-12-13T23:19:52.925829265Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 13 23:19:52.926610 dockerd[1773]: time="2025-12-13T23:19:52.926572967Z" level=info msg="Initializing buildkit"
Dec 13 23:19:52.950807 dockerd[1773]: time="2025-12-13T23:19:52.950760485Z" level=info msg="Completed buildkit initialization"
Dec 13 23:19:52.956033 dockerd[1773]: time="2025-12-13T23:19:52.955977664Z" level=info msg="Daemon has completed initialization"
Dec 13 23:19:52.956179 dockerd[1773]: time="2025-12-13T23:19:52.956065822Z" level=info msg="API listen on /run/docker.sock"
Dec 13 23:19:52.956311 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 23:19:53.435079 containerd[1598]: time="2025-12-13T23:19:53.435028878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 13 23:19:53.980098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30456725.mount: Deactivated successfully.
Dec 13 23:19:54.717501 containerd[1598]: time="2025-12-13T23:19:54.717448536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:54.719209 containerd[1598]: time="2025-12-13T23:19:54.719124073Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766"
Dec 13 23:19:54.719926 containerd[1598]: time="2025-12-13T23:19:54.719869738Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:54.723743 containerd[1598]: time="2025-12-13T23:19:54.723312455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:54.724332 containerd[1598]: time="2025-12-13T23:19:54.724293366Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.289214052s"
Dec 13 23:19:54.724332 containerd[1598]: time="2025-12-13T23:19:54.724325659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\""
Dec 13 23:19:54.725286 containerd[1598]: time="2025-12-13T23:19:54.725257250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 13 23:19:55.728719 containerd[1598]: time="2025-12-13T23:19:55.728679223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:55.729747 containerd[1598]: time="2025-12-13T23:19:55.729707587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801"
Dec 13 23:19:55.731164 containerd[1598]: time="2025-12-13T23:19:55.731127288Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:55.734125 containerd[1598]: time="2025-12-13T23:19:55.734084999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:55.735600 containerd[1598]: time="2025-12-13T23:19:55.735575166Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.010286782s"
Dec 13 23:19:55.735652 containerd[1598]: time="2025-12-13T23:19:55.735613536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\""
Dec 13 23:19:55.736032 containerd[1598]: time="2025-12-13T23:19:55.736011508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 13 23:19:56.796481 containerd[1598]: time="2025-12-13T23:19:56.796419967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:56.797142 containerd[1598]: time="2025-12-13T23:19:56.797090920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300"
Dec 13 23:19:56.798033 containerd[1598]: time="2025-12-13T23:19:56.798009094Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:56.800990 containerd[1598]: time="2025-12-13T23:19:56.800423143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:56.802280 containerd[1598]: time="2025-12-13T23:19:56.802253015Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.066213848s"
Dec 13 23:19:56.802402 containerd[1598]: time="2025-12-13T23:19:56.802384000Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\""
Dec 13 23:19:56.802850 containerd[1598]: time="2025-12-13T23:19:56.802830077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 13 23:19:57.460115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 23:19:57.461880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 23:19:57.661144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:19:57.675637 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 23:19:57.716030 kubelet[2070]: E1213 23:19:57.715878 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 23:19:57.718887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 23:19:57.719031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 23:19:57.719364 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.5M memory peak.
Dec 13 23:19:57.902096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632103264.mount: Deactivated successfully.
Dec 13 23:19:58.228422 containerd[1598]: time="2025-12-13T23:19:58.228362875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:58.229513 containerd[1598]: time="2025-12-13T23:19:58.229296000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27558078"
Dec 13 23:19:58.230138 containerd[1598]: time="2025-12-13T23:19:58.230105084Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:58.232149 containerd[1598]: time="2025-12-13T23:19:58.232111965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:58.232874 containerd[1598]: time="2025-12-13T23:19:58.232761111Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.429814798s"
Dec 13 23:19:58.232874 containerd[1598]: time="2025-12-13T23:19:58.232792291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\""
Dec 13 23:19:58.233222 containerd[1598]: time="2025-12-13T23:19:58.233200870Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 13 23:19:58.877633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924153380.mount: Deactivated successfully.
Dec 13 23:19:59.415065 containerd[1598]: time="2025-12-13T23:19:59.415021504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:59.416155 containerd[1598]: time="2025-12-13T23:19:59.416087747Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282"
Dec 13 23:19:59.417469 containerd[1598]: time="2025-12-13T23:19:59.417421550Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:59.420782 containerd[1598]: time="2025-12-13T23:19:59.420751559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:19:59.421983 containerd[1598]: time="2025-12-13T23:19:59.421650942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.188419131s"
Dec 13 23:19:59.421983 containerd[1598]: time="2025-12-13T23:19:59.421677366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Dec 13 23:19:59.422218 containerd[1598]: time="2025-12-13T23:19:59.422198734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 23:19:59.908288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110861722.mount: Deactivated successfully.
Dec 13 23:19:59.915085 containerd[1598]: time="2025-12-13T23:19:59.915020247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 23:19:59.916044 containerd[1598]: time="2025-12-13T23:19:59.915786709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 13 23:19:59.917035 containerd[1598]: time="2025-12-13T23:19:59.917003421Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 23:19:59.919050 containerd[1598]: time="2025-12-13T23:19:59.919018417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 23:19:59.919849 containerd[1598]: time="2025-12-13T23:19:59.919643004Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.336534ms"
Dec 13 23:19:59.919849 containerd[1598]: time="2025-12-13T23:19:59.919676863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 13 23:19:59.920166 containerd[1598]: time="2025-12-13T23:19:59.920133431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 13 23:20:00.476375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907170910.mount: Deactivated successfully.
Dec 13 23:20:02.434780 containerd[1598]: time="2025-12-13T23:20:02.434677429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:02.435441 containerd[1598]: time="2025-12-13T23:20:02.435391557Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366"
Dec 13 23:20:02.436537 containerd[1598]: time="2025-12-13T23:20:02.436489856Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:02.439687 containerd[1598]: time="2025-12-13T23:20:02.439627111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:02.440920 containerd[1598]: time="2025-12-13T23:20:02.440644810Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.520479158s"
Dec 13 23:20:02.440920 containerd[1598]: time="2025-12-13T23:20:02.440681752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Dec 13 23:20:07.431092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:20:07.431237 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.5M memory peak.
Dec 13 23:20:07.433191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 23:20:07.505440 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-6.scope)...
Dec 13 23:20:07.505592 systemd[1]: Reloading...
Dec 13 23:20:07.583078 zram_generator::config[2274]: No configuration found.
Dec 13 23:20:07.798108 systemd[1]: Reloading finished in 292 ms.
Dec 13 23:20:07.850416 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 23:20:07.850498 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 23:20:07.852013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:20:07.852064 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.2M memory peak.
Dec 13 23:20:07.853519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 23:20:07.971583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 23:20:07.975497 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 23:20:08.008608 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 23:20:08.008608 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 13 23:20:08.008608 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 23:20:08.008944 kubelet[2316]: I1213 23:20:08.008666 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 23:20:08.568296 kubelet[2316]: I1213 23:20:08.568058 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 13 23:20:08.568296 kubelet[2316]: I1213 23:20:08.568090 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 23:20:08.568730 kubelet[2316]: I1213 23:20:08.568702 2316 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 13 23:20:08.593998 kubelet[2316]: E1213 23:20:08.593925 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Dec 13 23:20:08.596471 kubelet[2316]: I1213 23:20:08.596437 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 23:20:08.601912 kubelet[2316]: I1213 23:20:08.601893 2316 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 13 23:20:08.605345 kubelet[2316]: I1213 23:20:08.605312 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 23:20:08.605975 kubelet[2316]: I1213 23:20:08.605929 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 23:20:08.606148 kubelet[2316]: I1213 23:20:08.605976 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 23:20:08.606232 kubelet[2316]: I1213 23:20:08.606224 2316 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 23:20:08.606257 kubelet[2316]: I1213 23:20:08.606234 2316 container_manager_linux.go:304] "Creating device plugin manager"
Dec 13 23:20:08.606454 kubelet[2316]: I1213 23:20:08.606439 2316 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 23:20:08.608967 kubelet[2316]: I1213 23:20:08.608929 2316 kubelet.go:446] "Attempting to sync node with API server"
Dec 13 23:20:08.609090 kubelet[2316]: I1213 23:20:08.609066 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 23:20:08.609154 kubelet[2316]: I1213 23:20:08.609099 2316 kubelet.go:352] "Adding apiserver pod source"
Dec 13 23:20:08.609154 kubelet[2316]: I1213 23:20:08.609111 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 23:20:08.615193 kubelet[2316]: I1213 23:20:08.613784 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 13 23:20:08.615193 kubelet[2316]: W1213 23:20:08.614142 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Dec 13 23:20:08.615193 kubelet[2316]: E1213 23:20:08.614279 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Dec 13 23:20:08.615193 kubelet[2316]: W1213 23:20:08.614141 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused
Dec 13 23:20:08.615193 kubelet[2316]: E1213 23:20:08.614331 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError"
Dec 13 23:20:08.615193 kubelet[2316]: I1213 23:20:08.614487 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 23:20:08.615193 kubelet[2316]: W1213 23:20:08.614604 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 23:20:08.615840 kubelet[2316]: I1213 23:20:08.615819 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 13 23:20:08.615899 kubelet[2316]: I1213 23:20:08.615863 2316 server.go:1287] "Started kubelet"
Dec 13 23:20:08.617296 kubelet[2316]: I1213 23:20:08.617262 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 23:20:08.620507 kubelet[2316]: I1213 23:20:08.619333 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 23:20:08.620507 kubelet[2316]: I1213 23:20:08.619685 2316 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 23:20:08.620507 kubelet[2316]: I1213 23:20:08.620023 2316 server.go:479] "Adding debug handlers to kubelet server"
Dec 13 23:20:08.621169 kubelet[2316]: E1213 23:20:08.620832 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused"
event="&Event{ObjectMeta:{localhost.1880e9b2ecd1815e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 23:20:08.615838046 +0000 UTC m=+0.637528738,LastTimestamp:2025-12-13 23:20:08.615838046 +0000 UTC m=+0.637528738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 23:20:08.622124 kubelet[2316]: I1213 23:20:08.622103 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 23:20:08.622429 kubelet[2316]: I1213 23:20:08.622348 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 23:20:08.623016 kubelet[2316]: E1213 23:20:08.622864 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:08.623016 kubelet[2316]: I1213 23:20:08.622898 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 23:20:08.623127 kubelet[2316]: I1213 23:20:08.623104 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 23:20:08.623181 kubelet[2316]: I1213 23:20:08.623154 2316 reconciler.go:26] "Reconciler: start to sync state" Dec 13 23:20:08.623527 kubelet[2316]: W1213 23:20:08.623486 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Dec 13 23:20:08.623582 kubelet[2316]: E1213 23:20:08.623532 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 23:20:08.623582 kubelet[2316]: I1213 23:20:08.623548 2316 factory.go:221] Registration of the systemd container factory successfully Dec 13 23:20:08.623620 kubelet[2316]: E1213 23:20:08.623612 2316 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 23:20:08.623648 kubelet[2316]: I1213 23:20:08.623618 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 23:20:08.624035 kubelet[2316]: E1213 23:20:08.623984 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Dec 13 23:20:08.624730 kubelet[2316]: I1213 23:20:08.624707 2316 factory.go:221] Registration of the containerd container factory successfully Dec 13 23:20:08.637334 kubelet[2316]: I1213 23:20:08.637311 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 23:20:08.637334 kubelet[2316]: I1213 23:20:08.637330 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 23:20:08.637446 kubelet[2316]: I1213 23:20:08.637348 2316 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:20:08.638682 kubelet[2316]: I1213 23:20:08.638541 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 23:20:08.639782 kubelet[2316]: I1213 23:20:08.639763 2316 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 23:20:08.639878 kubelet[2316]: I1213 23:20:08.639868 2316 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 23:20:08.639937 kubelet[2316]: I1213 23:20:08.639927 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 23:20:08.639994 kubelet[2316]: I1213 23:20:08.639985 2316 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 23:20:08.640095 kubelet[2316]: E1213 23:20:08.640079 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 23:20:08.641137 kubelet[2316]: W1213 23:20:08.641087 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Dec 13 23:20:08.641216 kubelet[2316]: E1213 23:20:08.641145 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 23:20:08.712068 kubelet[2316]: I1213 23:20:08.712028 2316 policy_none.go:49] "None policy: Start" Dec 13 23:20:08.712068 kubelet[2316]: I1213 23:20:08.712060 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 23:20:08.712068 kubelet[2316]: I1213 23:20:08.712074 2316 state_mem.go:35] "Initializing new in-memory state store" Dec 13 23:20:08.718039 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 23:20:08.723987 kubelet[2316]: E1213 23:20:08.723942 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:08.727761 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 23:20:08.730591 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 23:20:08.740490 kubelet[2316]: E1213 23:20:08.740470 2316 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 23:20:08.740771 kubelet[2316]: I1213 23:20:08.740682 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 23:20:08.740908 kubelet[2316]: I1213 23:20:08.740891 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 23:20:08.740936 kubelet[2316]: I1213 23:20:08.740905 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 23:20:08.741190 kubelet[2316]: I1213 23:20:08.741165 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 23:20:08.742578 kubelet[2316]: E1213 23:20:08.742556 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 23:20:08.742641 kubelet[2316]: E1213 23:20:08.742602 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 23:20:08.824926 kubelet[2316]: E1213 23:20:08.824806 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Dec 13 23:20:08.843081 kubelet[2316]: I1213 23:20:08.843052 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:20:08.843535 kubelet[2316]: E1213 23:20:08.843505 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Dec 13 23:20:08.950260 systemd[1]: Created slice kubepods-burstable-pod43d30e7ed0f944f0e0d7ff401d5c45ca.slice - libcontainer container kubepods-burstable-pod43d30e7ed0f944f0e0d7ff401d5c45ca.slice. Dec 13 23:20:08.975136 kubelet[2316]: E1213 23:20:08.975056 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:08.977646 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 13 23:20:08.979085 kubelet[2316]: E1213 23:20:08.979064 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:08.980994 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Dec 13 23:20:08.982408 kubelet[2316]: E1213 23:20:08.982368 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:09.025784 kubelet[2316]: I1213 23:20:09.025526 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:09.025784 kubelet[2316]: I1213 23:20:09.025577 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:09.025784 kubelet[2316]: I1213 23:20:09.025602 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:09.025784 kubelet[2316]: I1213 23:20:09.025618 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:09.025784 kubelet[2316]: I1213 23:20:09.025636 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:09.026228 kubelet[2316]: I1213 23:20:09.025651 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:09.026228 kubelet[2316]: I1213 23:20:09.025665 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:09.026228 kubelet[2316]: I1213 23:20:09.025678 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:09.026228 kubelet[2316]: I1213 23:20:09.025695 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:09.045634 kubelet[2316]: I1213 23:20:09.045595 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:20:09.046027 kubelet[2316]: E1213 
23:20:09.045978 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Dec 13 23:20:09.225800 kubelet[2316]: E1213 23:20:09.225678 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Dec 13 23:20:09.276278 kubelet[2316]: E1213 23:20:09.276181 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.276869 containerd[1598]: time="2025-12-13T23:20:09.276822811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43d30e7ed0f944f0e0d7ff401d5c45ca,Namespace:kube-system,Attempt:0,}" Dec 13 23:20:09.280075 kubelet[2316]: E1213 23:20:09.280050 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.280715 containerd[1598]: time="2025-12-13T23:20:09.280550963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 13 23:20:09.282840 kubelet[2316]: E1213 23:20:09.282812 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.283227 containerd[1598]: time="2025-12-13T23:20:09.283197213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 13 23:20:09.310435 
containerd[1598]: time="2025-12-13T23:20:09.310277527Z" level=info msg="connecting to shim 60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a" address="unix:///run/containerd/s/a605f2a0fd80f7d5aa10ecb6d84e21d99b380fb9208fa3750d20da64bf22c780" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:20:09.315476 containerd[1598]: time="2025-12-13T23:20:09.315433071Z" level=info msg="connecting to shim 626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184" address="unix:///run/containerd/s/4a91da87cf7ff9ce99e7ddf423ac8c3963bed918fb21c62fb9a7b2d4b2f562d7" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:20:09.327010 containerd[1598]: time="2025-12-13T23:20:09.326971975Z" level=info msg="connecting to shim d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430" address="unix:///run/containerd/s/bd1c3dfc5e81a37ddca7469bbdf6c43eb8f692030a095115e13b9e3e5d9278e9" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:20:09.341188 systemd[1]: Started cri-containerd-60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a.scope - libcontainer container 60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a. Dec 13 23:20:09.344639 systemd[1]: Started cri-containerd-626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184.scope - libcontainer container 626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184. Dec 13 23:20:09.362156 systemd[1]: Started cri-containerd-d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430.scope - libcontainer container d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430. 
Dec 13 23:20:09.397694 containerd[1598]: time="2025-12-13T23:20:09.397630391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43d30e7ed0f944f0e0d7ff401d5c45ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a\"" Dec 13 23:20:09.400523 kubelet[2316]: E1213 23:20:09.400491 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.400710 containerd[1598]: time="2025-12-13T23:20:09.400674037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184\"" Dec 13 23:20:09.402914 containerd[1598]: time="2025-12-13T23:20:09.402882225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430\"" Dec 13 23:20:09.403596 kubelet[2316]: E1213 23:20:09.403572 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.403888 kubelet[2316]: E1213 23:20:09.403594 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.404518 containerd[1598]: time="2025-12-13T23:20:09.404488442Z" level=info msg="CreateContainer within sandbox \"60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 23:20:09.407030 containerd[1598]: 
time="2025-12-13T23:20:09.406842144Z" level=info msg="CreateContainer within sandbox \"d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 23:20:09.407088 containerd[1598]: time="2025-12-13T23:20:09.406919040Z" level=info msg="CreateContainer within sandbox \"626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 23:20:09.415382 containerd[1598]: time="2025-12-13T23:20:09.415349838Z" level=info msg="Container 1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:20:09.418910 containerd[1598]: time="2025-12-13T23:20:09.418875173Z" level=info msg="Container 443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:20:09.421232 containerd[1598]: time="2025-12-13T23:20:09.421200924Z" level=info msg="Container 79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab: CDI devices from CRI Config.CDIDevices: []" Dec 13 23:20:09.427316 containerd[1598]: time="2025-12-13T23:20:09.427243631Z" level=info msg="CreateContainer within sandbox \"60ce6b69edd31ca21b3047d52e42434d96b64b9583ff4a629dccbfba0dcfc44a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239\"" Dec 13 23:20:09.428229 containerd[1598]: time="2025-12-13T23:20:09.428205849Z" level=info msg="StartContainer for \"1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239\"" Dec 13 23:20:09.430976 containerd[1598]: time="2025-12-13T23:20:09.430662839Z" level=info msg="connecting to shim 1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239" address="unix:///run/containerd/s/a605f2a0fd80f7d5aa10ecb6d84e21d99b380fb9208fa3750d20da64bf22c780" protocol=ttrpc version=3 Dec 13 23:20:09.433506 
containerd[1598]: time="2025-12-13T23:20:09.433469640Z" level=info msg="CreateContainer within sandbox \"626caa34a944124dd5b01c5395c1580dd6adf5073262ddd4a01f1e1a6d51c184\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937\"" Dec 13 23:20:09.434263 containerd[1598]: time="2025-12-13T23:20:09.434228962Z" level=info msg="CreateContainer within sandbox \"d936900ee4a634403b09c11150cdca55949a21d636b24bcf0fdeb9a8eaf4f430\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab\"" Dec 13 23:20:09.434363 containerd[1598]: time="2025-12-13T23:20:09.434330890Z" level=info msg="StartContainer for \"443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937\"" Dec 13 23:20:09.434566 containerd[1598]: time="2025-12-13T23:20:09.434535545Z" level=info msg="StartContainer for \"79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab\"" Dec 13 23:20:09.435498 containerd[1598]: time="2025-12-13T23:20:09.435456497Z" level=info msg="connecting to shim 443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937" address="unix:///run/containerd/s/4a91da87cf7ff9ce99e7ddf423ac8c3963bed918fb21c62fb9a7b2d4b2f562d7" protocol=ttrpc version=3 Dec 13 23:20:09.435548 containerd[1598]: time="2025-12-13T23:20:09.435521117Z" level=info msg="connecting to shim 79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab" address="unix:///run/containerd/s/bd1c3dfc5e81a37ddca7469bbdf6c43eb8f692030a095115e13b9e3e5d9278e9" protocol=ttrpc version=3 Dec 13 23:20:09.450285 kubelet[2316]: I1213 23:20:09.450248 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:20:09.450737 kubelet[2316]: E1213 23:20:09.450701 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: 
connect: connection refused" node="localhost" Dec 13 23:20:09.457205 systemd[1]: Started cri-containerd-1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239.scope - libcontainer container 1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239. Dec 13 23:20:09.462322 systemd[1]: Started cri-containerd-443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937.scope - libcontainer container 443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937. Dec 13 23:20:09.463806 systemd[1]: Started cri-containerd-79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab.scope - libcontainer container 79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab. Dec 13 23:20:09.508757 containerd[1598]: time="2025-12-13T23:20:09.507947219Z" level=info msg="StartContainer for \"443ce9e44221d6ba7108ceb157421c1459ecece3c9ea9132d2f88b7a556e8937\" returns successfully" Dec 13 23:20:09.517374 containerd[1598]: time="2025-12-13T23:20:09.517337396Z" level=info msg="StartContainer for \"1a6fc896b8ecfce6d02babad534a07518f201817e8ee343b3c1f1c6ae85bb239\" returns successfully" Dec 13 23:20:09.521154 containerd[1598]: time="2025-12-13T23:20:09.521078464Z" level=info msg="StartContainer for \"79bdc03dd26acae0f7034d23c12c25c04f0fec019ff855d37e6b7a119f17adab\" returns successfully" Dec 13 23:20:09.569700 kubelet[2316]: W1213 23:20:09.569506 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Dec 13 23:20:09.571073 kubelet[2316]: E1213 23:20:09.571020 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection 
refused" logger="UnhandledError" Dec 13 23:20:09.649058 kubelet[2316]: E1213 23:20:09.649012 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:09.649153 kubelet[2316]: E1213 23:20:09.649142 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.652778 kubelet[2316]: E1213 23:20:09.652753 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:09.652873 kubelet[2316]: E1213 23:20:09.652855 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:09.654937 kubelet[2316]: E1213 23:20:09.654919 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:09.655078 kubelet[2316]: E1213 23:20:09.655060 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:10.252414 kubelet[2316]: I1213 23:20:10.252383 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:20:10.657995 kubelet[2316]: E1213 23:20:10.656557 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:10.657995 kubelet[2316]: E1213 23:20:10.656677 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
23:20:10.658993 kubelet[2316]: E1213 23:20:10.658833 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:10.659093 kubelet[2316]: E1213 23:20:10.659079 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:11.078508 kubelet[2316]: E1213 23:20:11.078194 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 23:20:11.165831 kubelet[2316]: I1213 23:20:11.165788 2316 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 23:20:11.165831 kubelet[2316]: E1213 23:20:11.165832 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 23:20:11.175857 kubelet[2316]: E1213 23:20:11.175822 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:11.276500 kubelet[2316]: E1213 23:20:11.276459 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:11.377153 kubelet[2316]: E1213 23:20:11.377003 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:11.477591 kubelet[2316]: E1213 23:20:11.477545 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:11.508279 kubelet[2316]: E1213 23:20:11.508095 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 23:20:11.508279 kubelet[2316]: E1213 23:20:11.508224 2316 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:11.612205 kubelet[2316]: I1213 23:20:11.612160 2316 apiserver.go:52] "Watching apiserver" Dec 13 23:20:11.624212 kubelet[2316]: I1213 23:20:11.623892 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:11.624212 kubelet[2316]: I1213 23:20:11.623948 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 23:20:11.631883 kubelet[2316]: E1213 23:20:11.631774 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:11.631883 kubelet[2316]: I1213 23:20:11.631808 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:11.633811 kubelet[2316]: E1213 23:20:11.633760 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:11.633811 kubelet[2316]: I1213 23:20:11.633788 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:11.635351 kubelet[2316]: E1213 23:20:11.635316 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:11.657221 kubelet[2316]: I1213 23:20:11.657137 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:11.659318 kubelet[2316]: E1213 23:20:11.659126 2316 kubelet.go:3196] "Failed creating a mirror 
pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:11.659318 kubelet[2316]: E1213 23:20:11.659266 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:13.412037 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-6.scope)... Dec 13 23:20:13.412054 systemd[1]: Reloading... Dec 13 23:20:13.499993 zram_generator::config[2638]: No configuration found. Dec 13 23:20:13.775161 systemd[1]: Reloading finished in 362 ms. Dec 13 23:20:13.802469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:20:13.824062 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 23:20:13.824336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:20:13.824397 systemd[1]: kubelet.service: Consumed 1.020s CPU time, 126.7M memory peak. Dec 13 23:20:13.826165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 23:20:13.964001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 23:20:13.968028 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 23:20:14.019780 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:20:14.020134 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 13 23:20:14.020134 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 23:20:14.020256 kubelet[2680]: I1213 23:20:14.020213 2680 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 23:20:14.028282 kubelet[2680]: I1213 23:20:14.027301 2680 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 13 23:20:14.028282 kubelet[2680]: I1213 23:20:14.027329 2680 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 23:20:14.028282 kubelet[2680]: I1213 23:20:14.027578 2680 server.go:954] "Client rotation is on, will bootstrap in background" Dec 13 23:20:14.028982 kubelet[2680]: I1213 23:20:14.028919 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 23:20:14.033537 kubelet[2680]: I1213 23:20:14.033502 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 23:20:14.039406 kubelet[2680]: I1213 23:20:14.039378 2680 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 23:20:14.043190 kubelet[2680]: I1213 23:20:14.043131 2680 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 23:20:14.043336 kubelet[2680]: I1213 23:20:14.043312 2680 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 23:20:14.043506 kubelet[2680]: I1213 23:20:14.043337 2680 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 23:20:14.043580 kubelet[2680]: I1213 23:20:14.043517 2680 topology_manager.go:138] "Creating topology manager with none policy" 
Dec 13 23:20:14.043580 kubelet[2680]: I1213 23:20:14.043525 2680 container_manager_linux.go:304] "Creating device plugin manager" Dec 13 23:20:14.043580 kubelet[2680]: I1213 23:20:14.043566 2680 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:20:14.043706 kubelet[2680]: I1213 23:20:14.043694 2680 kubelet.go:446] "Attempting to sync node with API server" Dec 13 23:20:14.043735 kubelet[2680]: I1213 23:20:14.043709 2680 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 23:20:14.043735 kubelet[2680]: I1213 23:20:14.043733 2680 kubelet.go:352] "Adding apiserver pod source" Dec 13 23:20:14.043774 kubelet[2680]: I1213 23:20:14.043746 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 23:20:14.045205 kubelet[2680]: I1213 23:20:14.045181 2680 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 23:20:14.045682 kubelet[2680]: I1213 23:20:14.045669 2680 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 23:20:14.046153 kubelet[2680]: I1213 23:20:14.046132 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 23:20:14.046233 kubelet[2680]: I1213 23:20:14.046167 2680 server.go:1287] "Started kubelet" Dec 13 23:20:14.046349 kubelet[2680]: I1213 23:20:14.046324 2680 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 23:20:14.046545 kubelet[2680]: I1213 23:20:14.046486 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 23:20:14.046790 kubelet[2680]: I1213 23:20:14.046765 2680 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 23:20:14.047774 kubelet[2680]: I1213 23:20:14.047760 2680 server.go:479] "Adding debug handlers to kubelet server" Dec 13 23:20:14.050040 kubelet[2680]: I1213 23:20:14.049697 2680 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 23:20:14.056561 kubelet[2680]: E1213 23:20:14.056327 2680 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 23:20:14.057822 kubelet[2680]: I1213 23:20:14.057796 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 23:20:14.059297 kubelet[2680]: I1213 23:20:14.059270 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 23:20:14.059519 kubelet[2680]: E1213 23:20:14.059495 2680 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 23:20:14.060418 kubelet[2680]: I1213 23:20:14.060398 2680 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 23:20:14.060824 kubelet[2680]: I1213 23:20:14.060783 2680 reconciler.go:26] "Reconciler: start to sync state" Dec 13 23:20:14.065438 kubelet[2680]: I1213 23:20:14.065406 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 23:20:14.068099 kubelet[2680]: I1213 23:20:14.068058 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 23:20:14.068099 kubelet[2680]: I1213 23:20:14.068083 2680 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 23:20:14.068189 kubelet[2680]: I1213 23:20:14.068111 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 13 23:20:14.068189 kubelet[2680]: I1213 23:20:14.068119 2680 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 23:20:14.068189 kubelet[2680]: E1213 23:20:14.068160 2680 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 23:20:14.076631 kubelet[2680]: I1213 23:20:14.076552 2680 factory.go:221] Registration of the containerd container factory successfully Dec 13 23:20:14.076631 kubelet[2680]: I1213 23:20:14.076575 2680 factory.go:221] Registration of the systemd container factory successfully Dec 13 23:20:14.076776 kubelet[2680]: I1213 23:20:14.076746 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 23:20:14.113063 kubelet[2680]: I1213 23:20:14.113035 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 23:20:14.113266 kubelet[2680]: I1213 23:20:14.113245 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 23:20:14.113350 kubelet[2680]: I1213 23:20:14.113340 2680 state_mem.go:36] "Initialized new in-memory state store" Dec 13 23:20:14.113563 kubelet[2680]: I1213 23:20:14.113547 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 23:20:14.113638 kubelet[2680]: I1213 23:20:14.113616 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 23:20:14.113684 kubelet[2680]: I1213 23:20:14.113677 2680 policy_none.go:49] "None policy: Start" Dec 13 23:20:14.113727 kubelet[2680]: I1213 23:20:14.113721 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 23:20:14.113771 kubelet[2680]: I1213 23:20:14.113765 2680 state_mem.go:35] "Initializing new in-memory state store" Dec 13 23:20:14.113937 kubelet[2680]: I1213 23:20:14.113922 2680 state_mem.go:75] "Updated machine memory state" Dec 13 23:20:14.117857 kubelet[2680]: 
I1213 23:20:14.117826 2680 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 23:20:14.118047 kubelet[2680]: I1213 23:20:14.118033 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 23:20:14.118097 kubelet[2680]: I1213 23:20:14.118048 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 23:20:14.118498 kubelet[2680]: I1213 23:20:14.118224 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 23:20:14.119814 kubelet[2680]: E1213 23:20:14.119313 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 23:20:14.169006 kubelet[2680]: I1213 23:20:14.168973 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:14.169354 kubelet[2680]: I1213 23:20:14.169040 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.169510 kubelet[2680]: I1213 23:20:14.169252 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:14.222134 kubelet[2680]: I1213 23:20:14.222109 2680 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 23:20:14.231481 kubelet[2680]: I1213 23:20:14.231447 2680 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 23:20:14.231609 kubelet[2680]: I1213 23:20:14.231533 2680 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 23:20:14.361662 kubelet[2680]: I1213 23:20:14.361415 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:14.361662 kubelet[2680]: I1213 23:20:14.361450 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:14.361662 kubelet[2680]: I1213 23:20:14.361468 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.361662 kubelet[2680]: I1213 23:20:14.361508 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.361662 kubelet[2680]: I1213 23:20:14.361528 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.361847 kubelet[2680]: I1213 23:20:14.361571 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:14.361847 kubelet[2680]: I1213 23:20:14.361616 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43d30e7ed0f944f0e0d7ff401d5c45ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43d30e7ed0f944f0e0d7ff401d5c45ca\") " pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:14.361847 kubelet[2680]: I1213 23:20:14.361636 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.361847 kubelet[2680]: I1213 23:20:14.361653 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 23:20:14.475635 kubelet[2680]: E1213 23:20:14.475586 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:14.476188 kubelet[2680]: E1213 23:20:14.476108 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:14.476880 kubelet[2680]: E1213 23:20:14.476849 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:15.045240 kubelet[2680]: I1213 23:20:15.045190 2680 apiserver.go:52] "Watching apiserver" Dec 13 23:20:15.060841 kubelet[2680]: I1213 23:20:15.060653 2680 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 23:20:15.087948 kubelet[2680]: I1213 23:20:15.087908 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:15.088143 kubelet[2680]: I1213 23:20:15.088120 2680 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:15.088274 kubelet[2680]: E1213 23:20:15.088245 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:15.095627 kubelet[2680]: E1213 23:20:15.095533 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 23:20:15.095835 kubelet[2680]: E1213 23:20:15.095819 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:15.097062 kubelet[2680]: E1213 23:20:15.097027 2680 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 23:20:15.097985 kubelet[2680]: E1213 23:20:15.097948 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:15.106724 kubelet[2680]: I1213 23:20:15.106576 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.106548232 podStartE2EDuration="1.106548232s" podCreationTimestamp="2025-12-13 23:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:15.106408981 +0000 UTC m=+1.132977247" watchObservedRunningTime="2025-12-13 23:20:15.106548232 +0000 UTC m=+1.133116498" Dec 13 23:20:15.129776 kubelet[2680]: I1213 23:20:15.129160 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.129123069 podStartE2EDuration="1.129123069s" podCreationTimestamp="2025-12-13 23:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:15.128113444 +0000 UTC m=+1.154681710" watchObservedRunningTime="2025-12-13 23:20:15.129123069 +0000 UTC m=+1.155691335" Dec 13 23:20:15.129776 kubelet[2680]: I1213 23:20:15.129290 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1292861140000001 podStartE2EDuration="1.129286114s" podCreationTimestamp="2025-12-13 23:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:15.113931621 +0000 UTC m=+1.140499927" watchObservedRunningTime="2025-12-13 23:20:15.129286114 +0000 UTC m=+1.155854380" Dec 13 23:20:15.225337 sudo[1751]: pam_unix(sudo:session): session closed for user root Dec 13 23:20:15.227345 sshd[1750]: Connection closed by 10.0.0.1 port 33922 Dec 13 23:20:15.227758 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:15.231918 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:33922.service: Deactivated successfully. Dec 13 23:20:15.233911 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 13 23:20:15.234179 systemd[1]: session-6.scope: Consumed 5.951s CPU time, 191M memory peak. Dec 13 23:20:15.235099 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Dec 13 23:20:15.236250 systemd-logind[1556]: Removed session 6. Dec 13 23:20:16.089146 kubelet[2680]: E1213 23:20:16.089116 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:16.090082 kubelet[2680]: E1213 23:20:16.089228 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:17.091590 kubelet[2680]: E1213 23:20:17.091468 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:18.033492 kubelet[2680]: I1213 23:20:18.033446 2680 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 23:20:18.033813 containerd[1598]: time="2025-12-13T23:20:18.033766449Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 23:20:18.034226 kubelet[2680]: I1213 23:20:18.033994 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 23:20:18.770661 systemd[1]: Created slice kubepods-burstable-pod628a0d17_c03c_4848_b73e_f2ab3b97f537.slice - libcontainer container kubepods-burstable-pod628a0d17_c03c_4848_b73e_f2ab3b97f537.slice. Dec 13 23:20:18.779169 systemd[1]: Created slice kubepods-besteffort-pod49db3c7b_cd5b_4097_9f8b_133e5028142c.slice - libcontainer container kubepods-besteffort-pod49db3c7b_cd5b_4097_9f8b_133e5028142c.slice. 
Dec 13 23:20:18.785804 kubelet[2680]: I1213 23:20:18.785687 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zjb\" (UniqueName: \"kubernetes.io/projected/49db3c7b-cd5b-4097-9f8b-133e5028142c-kube-api-access-w8zjb\") pod \"kube-proxy-vmjxz\" (UID: \"49db3c7b-cd5b-4097-9f8b-133e5028142c\") " pod="kube-system/kube-proxy-vmjxz" Dec 13 23:20:18.785804 kubelet[2680]: I1213 23:20:18.785726 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/628a0d17-c03c-4848-b73e-f2ab3b97f537-run\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.785804 kubelet[2680]: I1213 23:20:18.785745 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/628a0d17-c03c-4848-b73e-f2ab3b97f537-cni\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.785804 kubelet[2680]: I1213 23:20:18.785760 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/628a0d17-c03c-4848-b73e-f2ab3b97f537-xtables-lock\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.786244 kubelet[2680]: I1213 23:20:18.785831 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49db3c7b-cd5b-4097-9f8b-133e5028142c-kube-proxy\") pod \"kube-proxy-vmjxz\" (UID: \"49db3c7b-cd5b-4097-9f8b-133e5028142c\") " pod="kube-system/kube-proxy-vmjxz" Dec 13 23:20:18.786244 kubelet[2680]: I1213 23:20:18.785872 2680 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/628a0d17-c03c-4848-b73e-f2ab3b97f537-flannel-cfg\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.786244 kubelet[2680]: I1213 23:20:18.785916 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49db3c7b-cd5b-4097-9f8b-133e5028142c-lib-modules\") pod \"kube-proxy-vmjxz\" (UID: \"49db3c7b-cd5b-4097-9f8b-133e5028142c\") " pod="kube-system/kube-proxy-vmjxz" Dec 13 23:20:18.786244 kubelet[2680]: I1213 23:20:18.785939 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/628a0d17-c03c-4848-b73e-f2ab3b97f537-cni-plugin\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.786423 kubelet[2680]: I1213 23:20:18.786392 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgz8x\" (UniqueName: \"kubernetes.io/projected/628a0d17-c03c-4848-b73e-f2ab3b97f537-kube-api-access-tgz8x\") pod \"kube-flannel-ds-q5znx\" (UID: \"628a0d17-c03c-4848-b73e-f2ab3b97f537\") " pod="kube-flannel/kube-flannel-ds-q5znx" Dec 13 23:20:18.786483 kubelet[2680]: I1213 23:20:18.786441 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49db3c7b-cd5b-4097-9f8b-133e5028142c-xtables-lock\") pod \"kube-proxy-vmjxz\" (UID: \"49db3c7b-cd5b-4097-9f8b-133e5028142c\") " pod="kube-system/kube-proxy-vmjxz" Dec 13 23:20:18.900558 kubelet[2680]: E1213 23:20:18.900497 2680 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap 
"kube-root-ca.crt" not found Dec 13 23:20:18.900558 kubelet[2680]: E1213 23:20:18.900529 2680 projected.go:194] Error preparing data for projected volume kube-api-access-w8zjb for pod kube-system/kube-proxy-vmjxz: configmap "kube-root-ca.crt" not found Dec 13 23:20:18.900723 kubelet[2680]: E1213 23:20:18.900597 2680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49db3c7b-cd5b-4097-9f8b-133e5028142c-kube-api-access-w8zjb podName:49db3c7b-cd5b-4097-9f8b-133e5028142c nodeName:}" failed. No retries permitted until 2025-12-13 23:20:19.400575303 +0000 UTC m=+5.427143569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w8zjb" (UniqueName: "kubernetes.io/projected/49db3c7b-cd5b-4097-9f8b-133e5028142c-kube-api-access-w8zjb") pod "kube-proxy-vmjxz" (UID: "49db3c7b-cd5b-4097-9f8b-133e5028142c") : configmap "kube-root-ca.crt" not found Dec 13 23:20:19.073999 kubelet[2680]: E1213 23:20:19.073758 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:19.074726 containerd[1598]: time="2025-12-13T23:20:19.074545133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q5znx,Uid:628a0d17-c03c-4848-b73e-f2ab3b97f537,Namespace:kube-flannel,Attempt:0,}" Dec 13 23:20:19.103233 containerd[1598]: time="2025-12-13T23:20:19.103107599Z" level=info msg="connecting to shim c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8" address="unix:///run/containerd/s/284926e818bd184f72a2b027b4f5cd433b3ca38698a19d4e21b8a54491d72b64" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:20:19.127187 systemd[1]: Started cri-containerd-c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8.scope - libcontainer container c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8. 
Dec 13 23:20:19.157558 containerd[1598]: time="2025-12-13T23:20:19.157500700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q5znx,Uid:628a0d17-c03c-4848-b73e-f2ab3b97f537,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\"" Dec 13 23:20:19.158439 kubelet[2680]: E1213 23:20:19.158415 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:19.163075 containerd[1598]: time="2025-12-13T23:20:19.162627018Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 23:20:19.690312 kubelet[2680]: E1213 23:20:19.690068 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:19.690722 containerd[1598]: time="2025-12-13T23:20:19.690660443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmjxz,Uid:49db3c7b-cd5b-4097-9f8b-133e5028142c,Namespace:kube-system,Attempt:0,}" Dec 13 23:20:19.706259 containerd[1598]: time="2025-12-13T23:20:19.706215527Z" level=info msg="connecting to shim 15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5" address="unix:///run/containerd/s/7b990dd662bb1816fcf057ca0762c61b2a43ead19120633ed38c472ca8962edb" namespace=k8s.io protocol=ttrpc version=3 Dec 13 23:20:19.733165 systemd[1]: Started cri-containerd-15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5.scope - libcontainer container 15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5. 
Dec 13 23:20:19.753597 containerd[1598]: time="2025-12-13T23:20:19.753551987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmjxz,Uid:49db3c7b-cd5b-4097-9f8b-133e5028142c,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5\""
Dec 13 23:20:19.754252 kubelet[2680]: E1213 23:20:19.754228 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:19.756500 containerd[1598]: time="2025-12-13T23:20:19.756425715Z" level=info msg="CreateContainer within sandbox \"15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 23:20:19.766440 containerd[1598]: time="2025-12-13T23:20:19.766396557Z" level=info msg="Container 935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:19.774992 containerd[1598]: time="2025-12-13T23:20:19.774947471Z" level=info msg="CreateContainer within sandbox \"15c1e8d4be5b770ce3cb2c2852ce97c5ef741668d0ee77f62fc91aef1ec7d3b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705\""
Dec 13 23:20:19.775492 containerd[1598]: time="2025-12-13T23:20:19.775466146Z" level=info msg="StartContainer for \"935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705\""
Dec 13 23:20:19.776950 containerd[1598]: time="2025-12-13T23:20:19.776801727Z" level=info msg="connecting to shim 935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705" address="unix:///run/containerd/s/7b990dd662bb1816fcf057ca0762c61b2a43ead19120633ed38c472ca8962edb" protocol=ttrpc version=3
Dec 13 23:20:19.802176 systemd[1]: Started cri-containerd-935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705.scope - libcontainer container 935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705.
Dec 13 23:20:19.871079 containerd[1598]: time="2025-12-13T23:20:19.871025562Z" level=info msg="StartContainer for \"935b4c98c8224291c85fbb242732ca922a2904a7c7e28a9f2b4f642ea7eeb705\" returns successfully"
Dec 13 23:20:20.100326 kubelet[2680]: E1213 23:20:20.099357 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:20.275975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317996840.mount: Deactivated successfully.
Dec 13 23:20:20.307006 containerd[1598]: time="2025-12-13T23:20:20.306947708Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:20.307546 containerd[1598]: time="2025-12-13T23:20:20.307499343Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=0"
Dec 13 23:20:20.308313 containerd[1598]: time="2025-12-13T23:20:20.308284462Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:20.313382 containerd[1598]: time="2025-12-13T23:20:20.313334124Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:20.314467 containerd[1598]: time="2025-12-13T23:20:20.314355047Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.151689475s"
Dec 13 23:20:20.314467 containerd[1598]: time="2025-12-13T23:20:20.314387642Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Dec 13 23:20:20.317765 containerd[1598]: time="2025-12-13T23:20:20.317721568Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Dec 13 23:20:20.326572 containerd[1598]: time="2025-12-13T23:20:20.326537410Z" level=info msg="Container c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:20.331412 containerd[1598]: time="2025-12-13T23:20:20.331378784Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83\""
Dec 13 23:20:20.332031 containerd[1598]: time="2025-12-13T23:20:20.332003688Z" level=info msg="StartContainer for \"c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83\""
Dec 13 23:20:20.332782 containerd[1598]: time="2025-12-13T23:20:20.332757732Z" level=info msg="connecting to shim c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83" address="unix:///run/containerd/s/284926e818bd184f72a2b027b4f5cd433b3ca38698a19d4e21b8a54491d72b64" protocol=ttrpc version=3
Dec 13 23:20:20.351173 systemd[1]: Started cri-containerd-c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83.scope - libcontainer container c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83.
Dec 13 23:20:20.377178 systemd[1]: cri-containerd-c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83.scope: Deactivated successfully.
Dec 13 23:20:20.378523 containerd[1598]: time="2025-12-13T23:20:20.377945250Z" level=info msg="StartContainer for \"c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83\" returns successfully"
Dec 13 23:20:20.380252 containerd[1598]: time="2025-12-13T23:20:20.380196663Z" level=info msg="received container exit event container_id:\"c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83\" id:\"c3f341738e154673fb1c508f436f1a01ca59c3f64df59be4675ed76130678e83\" pid:3019 exited_at:{seconds:1765668020 nanos:379382948}"
Dec 13 23:20:21.102570 kubelet[2680]: E1213 23:20:21.102544 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:21.104278 containerd[1598]: time="2025-12-13T23:20:21.104228424Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Dec 13 23:20:21.112857 kubelet[2680]: I1213 23:20:21.112589 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmjxz" podStartSLOduration=3.112573338 podStartE2EDuration="3.112573338s" podCreationTimestamp="2025-12-13 23:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:20.10786238 +0000 UTC m=+6.134430646" watchObservedRunningTime="2025-12-13 23:20:21.112573338 +0000 UTC m=+7.139141604"
Dec 13 23:20:22.291596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618913654.mount: Deactivated successfully.
Dec 13 23:20:22.319262 kubelet[2680]: E1213 23:20:22.319229 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:23.106251 kubelet[2680]: E1213 23:20:23.106199 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:23.525031 containerd[1598]: time="2025-12-13T23:20:23.524919989Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:23.526457 containerd[1598]: time="2025-12-13T23:20:23.526111518Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=15903163"
Dec 13 23:20:23.527413 containerd[1598]: time="2025-12-13T23:20:23.527380557Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:23.530141 containerd[1598]: time="2025-12-13T23:20:23.530110410Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 23:20:23.532017 containerd[1598]: time="2025-12-13T23:20:23.531988612Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.427720274s"
Dec 13 23:20:23.532067 containerd[1598]: time="2025-12-13T23:20:23.532023248Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Dec 13 23:20:23.541294 containerd[1598]: time="2025-12-13T23:20:23.541258635Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 23:20:23.548474 containerd[1598]: time="2025-12-13T23:20:23.548155600Z" level=info msg="Container c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:23.554519 containerd[1598]: time="2025-12-13T23:20:23.554472198Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7\""
Dec 13 23:20:23.555112 containerd[1598]: time="2025-12-13T23:20:23.555088440Z" level=info msg="StartContainer for \"c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7\""
Dec 13 23:20:23.556048 containerd[1598]: time="2025-12-13T23:20:23.556021801Z" level=info msg="connecting to shim c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7" address="unix:///run/containerd/s/284926e818bd184f72a2b027b4f5cd433b3ca38698a19d4e21b8a54491d72b64" protocol=ttrpc version=3
Dec 13 23:20:23.588181 systemd[1]: Started cri-containerd-c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7.scope - libcontainer container c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7.
Dec 13 23:20:23.616633 systemd[1]: cri-containerd-c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7.scope: Deactivated successfully.
Dec 13 23:20:23.617756 containerd[1598]: time="2025-12-13T23:20:23.617701052Z" level=info msg="received container exit event container_id:\"c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7\" id:\"c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7\" pid:3099 exited_at:{seconds:1765668023 nanos:616811045}"
Dec 13 23:20:23.622805 kubelet[2680]: E1213 23:20:23.622780 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:23.634901 containerd[1598]: time="2025-12-13T23:20:23.634864433Z" level=info msg="StartContainer for \"c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7\" returns successfully"
Dec 13 23:20:23.650233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7be6e62016001e0b92d038460e57878792a872a3487ad40630e584ebed124d7-rootfs.mount: Deactivated successfully.
Dec 13 23:20:23.676298 kubelet[2680]: I1213 23:20:23.676270 2680 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 13 23:20:23.779876 systemd[1]: Created slice kubepods-burstable-pod6362683e_b03c_4683_9af2_d86253fdd186.slice - libcontainer container kubepods-burstable-pod6362683e_b03c_4683_9af2_d86253fdd186.slice.
Dec 13 23:20:23.785792 systemd[1]: Created slice kubepods-burstable-pod91ea23ed_74ff_4704_9bd2_c0014357b9c1.slice - libcontainer container kubepods-burstable-pod91ea23ed_74ff_4704_9bd2_c0014357b9c1.slice.
Dec 13 23:20:23.831096 kubelet[2680]: I1213 23:20:23.831051 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8klv\" (UniqueName: \"kubernetes.io/projected/91ea23ed-74ff-4704-9bd2-c0014357b9c1-kube-api-access-t8klv\") pod \"coredns-668d6bf9bc-wgmdr\" (UID: \"91ea23ed-74ff-4704-9bd2-c0014357b9c1\") " pod="kube-system/coredns-668d6bf9bc-wgmdr"
Dec 13 23:20:23.831096 kubelet[2680]: I1213 23:20:23.831090 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91ea23ed-74ff-4704-9bd2-c0014357b9c1-config-volume\") pod \"coredns-668d6bf9bc-wgmdr\" (UID: \"91ea23ed-74ff-4704-9bd2-c0014357b9c1\") " pod="kube-system/coredns-668d6bf9bc-wgmdr"
Dec 13 23:20:23.831271 kubelet[2680]: I1213 23:20:23.831116 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2rr\" (UniqueName: \"kubernetes.io/projected/6362683e-b03c-4683-9af2-d86253fdd186-kube-api-access-hx2rr\") pod \"coredns-668d6bf9bc-x74kt\" (UID: \"6362683e-b03c-4683-9af2-d86253fdd186\") " pod="kube-system/coredns-668d6bf9bc-x74kt"
Dec 13 23:20:23.831271 kubelet[2680]: I1213 23:20:23.831146 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6362683e-b03c-4683-9af2-d86253fdd186-config-volume\") pod \"coredns-668d6bf9bc-x74kt\" (UID: \"6362683e-b03c-4683-9af2-d86253fdd186\") " pod="kube-system/coredns-668d6bf9bc-x74kt"
Dec 13 23:20:24.084494 kubelet[2680]: E1213 23:20:24.084402 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:24.085223 containerd[1598]: time="2025-12-13T23:20:24.085188145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x74kt,Uid:6362683e-b03c-4683-9af2-d86253fdd186,Namespace:kube-system,Attempt:0,}"
Dec 13 23:20:24.089152 kubelet[2680]: E1213 23:20:24.089076 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:24.090195 containerd[1598]: time="2025-12-13T23:20:24.090144715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgmdr,Uid:91ea23ed-74ff-4704-9bd2-c0014357b9c1,Namespace:kube-system,Attempt:0,}"
Dec 13 23:20:24.112290 kubelet[2680]: E1213 23:20:24.112216 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:24.113083 kubelet[2680]: E1213 23:20:24.112390 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:24.116601 containerd[1598]: time="2025-12-13T23:20:24.116457224Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 23:20:24.119300 containerd[1598]: time="2025-12-13T23:20:24.119244332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x74kt,Uid:6362683e-b03c-4683-9af2-d86253fdd186,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c08a2a1fec74d32e61a4a7e3cc3e53536b5dd8a5c9570b4e8681af3b159d565\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 23:20:24.121154 kubelet[2680]: E1213 23:20:24.121109 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c08a2a1fec74d32e61a4a7e3cc3e53536b5dd8a5c9570b4e8681af3b159d565\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 23:20:24.121371 kubelet[2680]: E1213 23:20:24.121266 2680 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c08a2a1fec74d32e61a4a7e3cc3e53536b5dd8a5c9570b4e8681af3b159d565\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-x74kt"
Dec 13 23:20:24.121371 kubelet[2680]: E1213 23:20:24.121290 2680 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c08a2a1fec74d32e61a4a7e3cc3e53536b5dd8a5c9570b4e8681af3b159d565\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-x74kt"
Dec 13 23:20:24.121371 kubelet[2680]: E1213 23:20:24.121334 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x74kt_kube-system(6362683e-b03c-4683-9af2-d86253fdd186)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x74kt_kube-system(6362683e-b03c-4683-9af2-d86253fdd186)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c08a2a1fec74d32e61a4a7e3cc3e53536b5dd8a5c9570b4e8681af3b159d565\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-x74kt" podUID="6362683e-b03c-4683-9af2-d86253fdd186"
Dec 13 23:20:24.131961 containerd[1598]: time="2025-12-13T23:20:24.131904665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgmdr,Uid:91ea23ed-74ff-4704-9bd2-c0014357b9c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c613e19e4a98ba2703ba2ffd9881ec7085299cd54f5e47d623056bdedc48b77f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 23:20:24.134070 kubelet[2680]: E1213 23:20:24.133704 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c613e19e4a98ba2703ba2ffd9881ec7085299cd54f5e47d623056bdedc48b77f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 23:20:24.134070 kubelet[2680]: E1213 23:20:24.133748 2680 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c613e19e4a98ba2703ba2ffd9881ec7085299cd54f5e47d623056bdedc48b77f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wgmdr"
Dec 13 23:20:24.134070 kubelet[2680]: E1213 23:20:24.133769 2680 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c613e19e4a98ba2703ba2ffd9881ec7085299cd54f5e47d623056bdedc48b77f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-wgmdr"
Dec 13 23:20:24.134070 kubelet[2680]: E1213 23:20:24.133808 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wgmdr_kube-system(91ea23ed-74ff-4704-9bd2-c0014357b9c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wgmdr_kube-system(91ea23ed-74ff-4704-9bd2-c0014357b9c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c613e19e4a98ba2703ba2ffd9881ec7085299cd54f5e47d623056bdedc48b77f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-wgmdr" podUID="91ea23ed-74ff-4704-9bd2-c0014357b9c1"
Dec 13 23:20:24.137630 containerd[1598]: time="2025-12-13T23:20:24.137597428Z" level=info msg="Container ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:24.145681 containerd[1598]: time="2025-12-13T23:20:24.145625673Z" level=info msg="CreateContainer within sandbox \"c09d7fc9d773cadf10ce70f6b6a0d3f74ec89300fe4a55efa7250143abbea4f8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842\""
Dec 13 23:20:24.146305 containerd[1598]: time="2025-12-13T23:20:24.146281475Z" level=info msg="StartContainer for \"ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842\""
Dec 13 23:20:24.147696 containerd[1598]: time="2025-12-13T23:20:24.147669589Z" level=info msg="connecting to shim ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842" address="unix:///run/containerd/s/284926e818bd184f72a2b027b4f5cd433b3ca38698a19d4e21b8a54491d72b64" protocol=ttrpc version=3
Dec 13 23:20:24.173167 systemd[1]: Started cri-containerd-ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842.scope - libcontainer container ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842.
Dec 13 23:20:24.200279 containerd[1598]: time="2025-12-13T23:20:24.200245892Z" level=info msg="StartContainer for \"ad8fb15cf89adf9cae5a5db17ce38c3cfc9211f66117db1a0818e50bb33c1842\" returns successfully"
Dec 13 23:20:25.118214 kubelet[2680]: E1213 23:20:25.117599 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:25.131224 kubelet[2680]: I1213 23:20:25.131121 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-q5znx" podStartSLOduration=2.757125607 podStartE2EDuration="7.131103882s" podCreationTimestamp="2025-12-13 23:20:18 +0000 UTC" firstStartedPulling="2025-12-13 23:20:19.162265277 +0000 UTC m=+5.188833503" lastFinishedPulling="2025-12-13 23:20:23.536243552 +0000 UTC m=+9.562811778" observedRunningTime="2025-12-13 23:20:25.130783918 +0000 UTC m=+11.157352184" watchObservedRunningTime="2025-12-13 23:20:25.131103882 +0000 UTC m=+11.157672148"
Dec 13 23:20:25.258770 systemd-networkd[1290]: flannel.1: Link UP
Dec 13 23:20:25.258777 systemd-networkd[1290]: flannel.1: Gained carrier
Dec 13 23:20:26.119379 kubelet[2680]: E1213 23:20:26.119321 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:26.214947 kubelet[2680]: E1213 23:20:26.214893 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:26.910407 systemd-networkd[1290]: flannel.1: Gained IPv6LL
Dec 13 23:20:31.564633 update_engine[1559]: I20251213 23:20:31.564551 1559 update_attempter.cc:509] Updating boot flags...
Dec 13 23:20:36.071351 kubelet[2680]: E1213 23:20:36.071319 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:36.071897 containerd[1598]: time="2025-12-13T23:20:36.071626632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgmdr,Uid:91ea23ed-74ff-4704-9bd2-c0014357b9c1,Namespace:kube-system,Attempt:0,}"
Dec 13 23:20:36.089122 systemd-networkd[1290]: cni0: Link UP
Dec 13 23:20:36.089128 systemd-networkd[1290]: cni0: Gained carrier
Dec 13 23:20:36.094814 systemd-networkd[1290]: cni0: Lost carrier
Dec 13 23:20:36.095508 systemd-networkd[1290]: vethdc4d907b: Link UP
Dec 13 23:20:36.097294 kernel: cni0: port 1(vethdc4d907b) entered blocking state
Dec 13 23:20:36.097375 kernel: cni0: port 1(vethdc4d907b) entered disabled state
Dec 13 23:20:36.097406 kernel: vethdc4d907b: entered allmulticast mode
Dec 13 23:20:36.098972 kernel: vethdc4d907b: entered promiscuous mode
Dec 13 23:20:36.107331 kernel: cni0: port 1(vethdc4d907b) entered blocking state
Dec 13 23:20:36.107453 kernel: cni0: port 1(vethdc4d907b) entered forwarding state
Dec 13 23:20:36.107468 systemd-networkd[1290]: vethdc4d907b: Gained carrier
Dec 13 23:20:36.108002 systemd-networkd[1290]: cni0: Gained carrier
Dec 13 23:20:36.110327 containerd[1598]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001c938), "name":"cbr0", "type":"bridge"}
Dec 13 23:20:36.110327 containerd[1598]: delegateAdd: netconf sent to delegate plugin:
Dec 13 23:20:36.143462 containerd[1598]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T23:20:36.143409934Z" level=info msg="connecting to shim 8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6" address="unix:///run/containerd/s/59c2bf08a64b5610e88cdc5ae7b79edefbfc36430171735da61e0645422bc930" namespace=k8s.io protocol=ttrpc version=3
Dec 13 23:20:36.173188 systemd[1]: Started cri-containerd-8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6.scope - libcontainer container 8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6.
Dec 13 23:20:36.184254 systemd-resolved[1252]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 23:20:36.205841 containerd[1598]: time="2025-12-13T23:20:36.205787272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wgmdr,Uid:91ea23ed-74ff-4704-9bd2-c0014357b9c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6\""
Dec 13 23:20:36.208561 kubelet[2680]: E1213 23:20:36.208537 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:36.218188 containerd[1598]: time="2025-12-13T23:20:36.218139995Z" level=info msg="CreateContainer within sandbox \"8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 23:20:36.228619 containerd[1598]: time="2025-12-13T23:20:36.228286278Z" level=info msg="Container cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:36.234142 containerd[1598]: time="2025-12-13T23:20:36.234090720Z" level=info msg="CreateContainer within sandbox \"8307e34ca111376f9121729ab91320f7b372149b7897f3fa5bca36ad25c294c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37\""
Dec 13 23:20:36.234918 containerd[1598]: time="2025-12-13T23:20:36.234887676Z" level=info msg="StartContainer for \"cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37\""
Dec 13 23:20:36.235945 containerd[1598]: time="2025-12-13T23:20:36.235901341Z" level=info msg="connecting to shim cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37" address="unix:///run/containerd/s/59c2bf08a64b5610e88cdc5ae7b79edefbfc36430171735da61e0645422bc930" protocol=ttrpc version=3
Dec 13 23:20:36.261239 systemd[1]: Started cri-containerd-cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37.scope - libcontainer container cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37.
Dec 13 23:20:36.293089 containerd[1598]: time="2025-12-13T23:20:36.293038047Z" level=info msg="StartContainer for \"cbea63691a2e688ca76bc49bce9a3a6f67cce191427ecbae9872dd23c43f5c37\" returns successfully"
Dec 13 23:20:37.148632 kubelet[2680]: E1213 23:20:37.148589 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:37.159649 kubelet[2680]: I1213 23:20:37.159026 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wgmdr" podStartSLOduration=18.159004274 podStartE2EDuration="18.159004274s" podCreationTimestamp="2025-12-13 23:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:37.158192076 +0000 UTC m=+23.184760422" watchObservedRunningTime="2025-12-13 23:20:37.159004274 +0000 UTC m=+23.185572580"
Dec 13 23:20:37.343126 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:49420.service - OpenSSH per-connection server daemon (10.0.0.1:49420).
Dec 13 23:20:37.404982 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 49420 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q
Dec 13 23:20:37.405071 systemd-networkd[1290]: vethdc4d907b: Gained IPv6LL
Dec 13 23:20:37.406833 sshd-session[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 23:20:37.411684 systemd-logind[1556]: New session 7 of user core.
Dec 13 23:20:37.425203 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 23:20:37.506555 sshd[3496]: Connection closed by 10.0.0.1 port 49420
Dec 13 23:20:37.507263 sshd-session[3492]: pam_unix(sshd:session): session closed for user core
Dec 13 23:20:37.511143 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:49420.service: Deactivated successfully.
Dec 13 23:20:37.513193 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 23:20:37.514109 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit.
Dec 13 23:20:37.515435 systemd-logind[1556]: Removed session 7.
Dec 13 23:20:38.071584 kubelet[2680]: E1213 23:20:38.071442 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:38.072075 containerd[1598]: time="2025-12-13T23:20:38.072010032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x74kt,Uid:6362683e-b03c-4683-9af2-d86253fdd186,Namespace:kube-system,Attempt:0,}"
Dec 13 23:20:38.085435 systemd-networkd[1290]: veth9c889cf8: Link UP
Dec 13 23:20:38.087156 kernel: cni0: port 2(veth9c889cf8) entered blocking state
Dec 13 23:20:38.087232 kernel: cni0: port 2(veth9c889cf8) entered disabled state
Dec 13 23:20:38.087250 kernel: veth9c889cf8: entered allmulticast mode
Dec 13 23:20:38.088984 kernel: veth9c889cf8: entered promiscuous mode
Dec 13 23:20:38.095306 kernel: cni0: port 2(veth9c889cf8) entered blocking state
Dec 13 23:20:38.095384 kernel: cni0: port 2(veth9c889cf8) entered forwarding state
Dec 13 23:20:38.095394 systemd-networkd[1290]: veth9c889cf8: Gained carrier
Dec 13 23:20:38.097320 containerd[1598]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400009e8e8), "name":"cbr0", "type":"bridge"}
Dec 13 23:20:38.097320 containerd[1598]: delegateAdd: netconf sent to delegate plugin:
Dec 13 23:20:38.110105 systemd-networkd[1290]: cni0: Gained IPv6LL
Dec 13 23:20:38.118511 containerd[1598]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-13T23:20:38.118452833Z" level=info msg="connecting to shim f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722" address="unix:///run/containerd/s/f9551877b2d2ddc7fb50c1280d1ee07334d408ec8cdba4913dc9818f240a6139" namespace=k8s.io protocol=ttrpc version=3
Dec 13 23:20:38.150158 kubelet[2680]: E1213 23:20:38.149784 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:38.152234 systemd[1]: Started cri-containerd-f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722.scope - libcontainer container f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722.
Dec 13 23:20:38.164484 systemd-resolved[1252]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 23:20:38.185013 containerd[1598]: time="2025-12-13T23:20:38.184945708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x74kt,Uid:6362683e-b03c-4683-9af2-d86253fdd186,Namespace:kube-system,Attempt:0,} returns sandbox id \"f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722\""
Dec 13 23:20:38.185759 kubelet[2680]: E1213 23:20:38.185738 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 23:20:38.188229 containerd[1598]: time="2025-12-13T23:20:38.188187351Z" level=info msg="CreateContainer within sandbox \"f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 23:20:38.196855 containerd[1598]: time="2025-12-13T23:20:38.196262802Z" level=info msg="Container 1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f: CDI devices from CRI Config.CDIDevices: []"
Dec 13 23:20:38.199711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842347338.mount: Deactivated successfully.
Dec 13 23:20:38.205134 containerd[1598]: time="2025-12-13T23:20:38.205000501Z" level=info msg="CreateContainer within sandbox \"f900df68565273337af8580f052c85f03ec179850f8dd15effe752cef82bb722\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f\"" Dec 13 23:20:38.205876 containerd[1598]: time="2025-12-13T23:20:38.205820421Z" level=info msg="StartContainer for \"1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f\"" Dec 13 23:20:38.206765 containerd[1598]: time="2025-12-13T23:20:38.206739577Z" level=info msg="connecting to shim 1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f" address="unix:///run/containerd/s/f9551877b2d2ddc7fb50c1280d1ee07334d408ec8cdba4913dc9818f240a6139" protocol=ttrpc version=3 Dec 13 23:20:38.229195 systemd[1]: Started cri-containerd-1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f.scope - libcontainer container 1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f. 
Dec 13 23:20:38.258169 containerd[1598]: time="2025-12-13T23:20:38.258124460Z" level=info msg="StartContainer for \"1ad3c91881e2a135c4644c53ba2a0b5709097600183ce700866c20adce1c573f\" returns successfully" Dec 13 23:20:39.153930 kubelet[2680]: E1213 23:20:39.153897 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:39.154289 kubelet[2680]: E1213 23:20:39.154190 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:39.166294 kubelet[2680]: I1213 23:20:39.165350 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x74kt" podStartSLOduration=20.165328539 podStartE2EDuration="20.165328539s" podCreationTimestamp="2025-12-13 23:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 23:20:39.164697168 +0000 UTC m=+25.191265474" watchObservedRunningTime="2025-12-13 23:20:39.165328539 +0000 UTC m=+25.191896845" Dec 13 23:20:39.325148 systemd-networkd[1290]: veth9c889cf8: Gained IPv6LL Dec 13 23:20:40.155190 kubelet[2680]: E1213 23:20:40.155163 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:41.156417 kubelet[2680]: E1213 23:20:41.156389 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 23:20:42.524976 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:47702.service - OpenSSH per-connection server daemon (10.0.0.1:47702). 
Dec 13 23:20:42.571869 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 47702 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:42.573187 sshd-session[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:42.577625 systemd-logind[1556]: New session 8 of user core. Dec 13 23:20:42.587146 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 23:20:42.659068 sshd[3659]: Connection closed by 10.0.0.1 port 47702 Dec 13 23:20:42.659703 sshd-session[3655]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:42.663535 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:47702.service: Deactivated successfully. Dec 13 23:20:42.665567 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 23:20:42.666487 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Dec 13 23:20:42.667485 systemd-logind[1556]: Removed session 8. Dec 13 23:20:47.681265 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:47706.service - OpenSSH per-connection server daemon (10.0.0.1:47706). Dec 13 23:20:47.726591 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 47706 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:47.728105 sshd-session[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:47.732014 systemd-logind[1556]: New session 9 of user core. Dec 13 23:20:47.747125 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 23:20:47.820036 sshd[3700]: Connection closed by 10.0.0.1 port 47706 Dec 13 23:20:47.820306 sshd-session[3696]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:47.833109 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:47706.service: Deactivated successfully. Dec 13 23:20:47.834714 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 23:20:47.836473 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. 
Dec 13 23:20:47.838795 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:47710.service - OpenSSH per-connection server daemon (10.0.0.1:47710). Dec 13 23:20:47.839442 systemd-logind[1556]: Removed session 9. Dec 13 23:20:47.903303 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 47710 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:47.905204 sshd-session[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:47.909681 systemd-logind[1556]: New session 10 of user core. Dec 13 23:20:47.919137 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 23:20:48.028570 sshd[3718]: Connection closed by 10.0.0.1 port 47710 Dec 13 23:20:48.027846 sshd-session[3714]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:48.041568 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:47710.service: Deactivated successfully. Dec 13 23:20:48.043145 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 23:20:48.048015 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Dec 13 23:20:48.049157 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:47718.service - OpenSSH per-connection server daemon (10.0.0.1:47718). Dec 13 23:20:48.050868 systemd-logind[1556]: Removed session 10. Dec 13 23:20:48.115476 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 47718 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:48.116900 sshd-session[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:48.120826 systemd-logind[1556]: New session 11 of user core. Dec 13 23:20:48.129157 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 23:20:48.201362 sshd[3733]: Connection closed by 10.0.0.1 port 47718 Dec 13 23:20:48.201315 sshd-session[3729]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:48.204692 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:47718.service: Deactivated successfully. Dec 13 23:20:48.207870 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 23:20:48.210322 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Dec 13 23:20:48.211271 systemd-logind[1556]: Removed session 11. Dec 13 23:20:53.212733 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:55424.service - OpenSSH per-connection server daemon (10.0.0.1:55424). Dec 13 23:20:53.275553 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 55424 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:53.276878 sshd-session[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:53.281203 systemd-logind[1556]: New session 12 of user core. Dec 13 23:20:53.292126 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 23:20:53.364990 sshd[3773]: Connection closed by 10.0.0.1 port 55424 Dec 13 23:20:53.363703 sshd-session[3769]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:53.374602 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:55424.service: Deactivated successfully. Dec 13 23:20:53.376544 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 23:20:53.377771 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Dec 13 23:20:53.380102 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:55430.service - OpenSSH per-connection server daemon (10.0.0.1:55430). Dec 13 23:20:53.381853 systemd-logind[1556]: Removed session 12. 
Dec 13 23:20:53.436551 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 55430 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:53.437948 sshd-session[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:53.442012 systemd-logind[1556]: New session 13 of user core. Dec 13 23:20:53.452110 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 23:20:53.569795 sshd[3790]: Connection closed by 10.0.0.1 port 55430 Dec 13 23:20:53.569595 sshd-session[3786]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:53.578340 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:55430.service: Deactivated successfully. Dec 13 23:20:53.580745 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 23:20:53.582027 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Dec 13 23:20:53.584550 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:55434.service - OpenSSH per-connection server daemon (10.0.0.1:55434). Dec 13 23:20:53.585657 systemd-logind[1556]: Removed session 13. Dec 13 23:20:53.637213 sshd[3802]: Accepted publickey for core from 10.0.0.1 port 55434 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:53.638698 sshd-session[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:53.642923 systemd-logind[1556]: New session 14 of user core. Dec 13 23:20:53.649133 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 23:20:54.223864 sshd[3806]: Connection closed by 10.0.0.1 port 55434 Dec 13 23:20:54.224145 sshd-session[3802]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:54.233933 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:55434.service: Deactivated successfully. Dec 13 23:20:54.235665 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 23:20:54.237289 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 23:20:54.243465 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:55446.service - OpenSSH per-connection server daemon (10.0.0.1:55446). Dec 13 23:20:54.245786 systemd-logind[1556]: Removed session 14. Dec 13 23:20:54.306399 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:54.308083 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:54.312610 systemd-logind[1556]: New session 15 of user core. Dec 13 23:20:54.320147 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 23:20:54.498097 sshd[3829]: Connection closed by 10.0.0.1 port 55446 Dec 13 23:20:54.498732 sshd-session[3825]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:54.510148 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:55446.service: Deactivated successfully. Dec 13 23:20:54.513188 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 23:20:54.515376 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Dec 13 23:20:54.517538 systemd-logind[1556]: Removed session 15. Dec 13 23:20:54.519663 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:55462.service - OpenSSH per-connection server daemon (10.0.0.1:55462). Dec 13 23:20:54.571569 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 55462 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:54.573194 sshd-session[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:54.578370 systemd-logind[1556]: New session 16 of user core. Dec 13 23:20:54.588173 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 23:20:54.659703 sshd[3844]: Connection closed by 10.0.0.1 port 55462 Dec 13 23:20:54.660143 sshd-session[3840]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:54.663990 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:55462.service: Deactivated successfully. Dec 13 23:20:54.665766 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 23:20:54.666519 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Dec 13 23:20:54.667643 systemd-logind[1556]: Removed session 16. Dec 13 23:20:59.673349 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). Dec 13 23:20:59.743129 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:20:59.745038 sshd-session[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:20:59.748753 systemd-logind[1556]: New session 17 of user core. Dec 13 23:20:59.755133 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 23:20:59.831978 sshd[3884]: Connection closed by 10.0.0.1 port 55464 Dec 13 23:20:59.832119 sshd-session[3880]: pam_unix(sshd:session): session closed for user core Dec 13 23:20:59.836204 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:55464.service: Deactivated successfully. Dec 13 23:20:59.838418 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 23:20:59.839182 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Dec 13 23:20:59.840998 systemd-logind[1556]: Removed session 17. Dec 13 23:21:04.847223 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:42936.service - OpenSSH per-connection server daemon (10.0.0.1:42936). 
Dec 13 23:21:04.893325 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:21:04.894861 sshd-session[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:21:04.899124 systemd-logind[1556]: New session 18 of user core. Dec 13 23:21:04.911219 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 23:21:04.982841 sshd[3922]: Connection closed by 10.0.0.1 port 42936 Dec 13 23:21:04.983180 sshd-session[3918]: pam_unix(sshd:session): session closed for user core Dec 13 23:21:04.987099 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:42936.service: Deactivated successfully. Dec 13 23:21:04.989900 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 23:21:04.990816 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Dec 13 23:21:04.991995 systemd-logind[1556]: Removed session 18. Dec 13 23:21:09.995816 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:42940.service - OpenSSH per-connection server daemon (10.0.0.1:42940). Dec 13 23:21:10.042731 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 42940 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 23:21:10.044173 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 23:21:10.048393 systemd-logind[1556]: New session 19 of user core. Dec 13 23:21:10.058214 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 23:21:10.127516 sshd[3961]: Connection closed by 10.0.0.1 port 42940 Dec 13 23:21:10.127351 sshd-session[3957]: pam_unix(sshd:session): session closed for user core Dec 13 23:21:10.130946 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:42940.service: Deactivated successfully. Dec 13 23:21:10.132841 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 23:21:10.134242 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. 
Dec 13 23:21:10.135564 systemd-logind[1556]: Removed session 19.