Dec 16 12:31:29.784602 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 16 12:31:29.784623 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025 Dec 16 12:31:29.784633 kernel: KASLR enabled Dec 16 12:31:29.784639 kernel: efi: EFI v2.7 by EDK II Dec 16 12:31:29.784644 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Dec 16 12:31:29.784649 kernel: random: crng init done Dec 16 12:31:29.784656 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 16 12:31:29.784662 kernel: secureboot: Secure boot enabled Dec 16 12:31:29.784667 kernel: ACPI: Early table checksum verification disabled Dec 16 12:31:29.784674 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Dec 16 12:31:29.784680 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 16 12:31:29.784686 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784691 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784697 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784704 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784711 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784717 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784723 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784730 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784735 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:31:29.784741 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 16 12:31:29.784747 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:31:29.784754 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:31:29.784760 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Dec 16 12:31:29.784766 kernel: Zone ranges: Dec 16 12:31:29.784773 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:31:29.784779 kernel: DMA32 empty Dec 16 12:31:29.784785 kernel: Normal empty Dec 16 12:31:29.784797 kernel: Device empty Dec 16 12:31:29.784805 kernel: Movable zone start for each node Dec 16 12:31:29.784811 kernel: Early memory node ranges Dec 16 12:31:29.784819 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Dec 16 12:31:29.784826 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Dec 16 12:31:29.784832 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Dec 16 12:31:29.784838 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Dec 16 12:31:29.784844 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Dec 16 12:31:29.784850 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Dec 16 12:31:29.784857 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Dec 16 12:31:29.784864 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Dec 16 12:31:29.784870 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 16 12:31:29.784879 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Dec 16 12:31:29.784886 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 16 12:31:29.784892 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Dec 16 12:31:29.784899 kernel: psci: probing for conduit method from ACPI. Dec 16 12:31:29.784907 kernel: psci: PSCIv1.1 detected in firmware. Dec 16 12:31:29.784913 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:31:29.784920 kernel: psci: Trusted OS migration not required Dec 16 12:31:29.784926 kernel: psci: SMC Calling Convention v1.1 Dec 16 12:31:29.784933 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 16 12:31:29.784939 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:31:29.784945 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:31:29.784952 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 16 12:31:29.784958 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:31:29.784965 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:31:29.784972 kernel: CPU features: detected: Spectre-v4 Dec 16 12:31:29.784978 kernel: CPU features: detected: Spectre-BHB Dec 16 12:31:29.784984 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:31:29.784991 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:31:29.784997 kernel: CPU features: detected: ARM erratum 1418040 Dec 16 12:31:29.785003 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:31:29.785010 kernel: alternatives: applying boot alternatives Dec 16 12:31:29.785017 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 16 12:31:29.785023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:31:29.785030 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:31:29.785038 kernel: Fallback order for Node 0: 0 Dec 16 12:31:29.785044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 16 12:31:29.785050 kernel: Policy zone: DMA Dec 16 12:31:29.785056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:31:29.785063 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 16 12:31:29.785069 kernel: software IO TLB: area num 4. Dec 16 12:31:29.785075 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 16 12:31:29.785081 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Dec 16 12:31:29.785088 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 12:31:29.785094 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:31:29.785101 kernel: rcu: RCU event tracing is enabled. Dec 16 12:31:29.785107 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 12:31:29.785115 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:31:29.785121 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:31:29.785128 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 16 12:31:29.785134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 12:31:29.785140 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:31:29.785147 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:31:29.785153 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:31:29.785159 kernel: GICv3: 256 SPIs implemented Dec 16 12:31:29.785165 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:31:29.785172 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:31:29.785178 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 16 12:31:29.785184 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 12:31:29.785192 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 16 12:31:29.785198 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 16 12:31:29.785205 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 16 12:31:29.785211 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 16 12:31:29.785218 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 16 12:31:29.785224 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 16 12:31:29.785230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:31:29.785237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:31:29.785295 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 16 12:31:29.785304 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 16 12:31:29.785311 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 16 12:31:29.785320 kernel: arm-pv: using stolen time PV Dec 16 12:31:29.785327 kernel: Console: colour dummy device 80x25 Dec 16 12:31:29.785333 kernel: ACPI: Core revision 20240827 Dec 16 12:31:29.785347 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 16 12:31:29.785354 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:31:29.785360 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:31:29.785367 kernel: landlock: Up and running. Dec 16 12:31:29.785373 kernel: SELinux: Initializing. Dec 16 12:31:29.785380 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:31:29.785388 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:31:29.785395 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:31:29.785402 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:31:29.785408 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:31:29.785415 kernel: Remapping and enabling EFI services. Dec 16 12:31:29.785421 kernel: smp: Bringing up secondary CPUs ... 
Dec 16 12:31:29.785428 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:31:29.785434 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 16 12:31:29.785441 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 16 12:31:29.785449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:31:29.785460 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 16 12:31:29.785467 kernel: Detected PIPT I-cache on CPU2 Dec 16 12:31:29.785476 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 16 12:31:29.785483 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 16 12:31:29.785490 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:31:29.785496 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 16 12:31:29.785503 kernel: Detected PIPT I-cache on CPU3 Dec 16 12:31:29.785512 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 16 12:31:29.785519 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 16 12:31:29.785526 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:31:29.785533 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 16 12:31:29.785539 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 12:31:29.785546 kernel: SMP: Total of 4 processors activated. Dec 16 12:31:29.785554 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:31:29.785561 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:31:29.785568 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:31:29.785576 kernel: CPU features: detected: Common not Private translations Dec 16 12:31:29.785585 kernel: CPU features: detected: CRC32 instructions Dec 16 12:31:29.785592 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 16 12:31:29.785599 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:31:29.785606 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:31:29.785613 kernel: CPU features: detected: Privileged Access Never Dec 16 12:31:29.785620 kernel: CPU features: detected: RAS Extension Support Dec 16 12:31:29.785627 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:31:29.785634 kernel: alternatives: applying system-wide alternatives Dec 16 12:31:29.785643 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 16 12:31:29.785652 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved) Dec 16 12:31:29.785659 kernel: devtmpfs: initialized Dec 16 12:31:29.785667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:31:29.785674 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 12:31:29.785681 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:31:29.785687 kernel: 0 pages in range for non-PLT usage Dec 16 12:31:29.785694 kernel: 508400 pages in range for PLT usage Dec 16 12:31:29.785701 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:31:29.785708 kernel: SMBIOS 3.0.0 present. 
Dec 16 12:31:29.785716 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 16 12:31:29.785723 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:31:29.785730 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:31:29.785737 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:31:29.785744 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:31:29.785751 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:31:29.785757 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:31:29.785764 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Dec 16 12:31:29.785771 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:31:29.785779 kernel: cpuidle: using governor menu Dec 16 12:31:29.785786 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 16 12:31:29.785793 kernel: ASID allocator initialised with 32768 entries Dec 16 12:31:29.785799 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:31:29.785806 kernel: Serial: AMBA PL011 UART driver Dec 16 12:31:29.785813 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:31:29.785820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:31:29.785827 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:31:29.785833 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:31:29.785842 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:31:29.785849 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:31:29.785858 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:31:29.785868 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:31:29.785876 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:31:29.785883 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:31:29.785890 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:31:29.785897 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:31:29.785903 kernel: ACPI: Interpreter enabled Dec 16 12:31:29.785912 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:31:29.785919 kernel: ACPI: MCFG table detected, 1 entries Dec 16 12:31:29.785926 kernel: ACPI: CPU0 has been hot-added Dec 16 12:31:29.785934 kernel: ACPI: CPU1 has been hot-added Dec 16 12:31:29.785941 kernel: ACPI: CPU2 has been hot-added Dec 16 12:31:29.785947 kernel: ACPI: CPU3 has been hot-added Dec 16 12:31:29.785955 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:31:29.785962 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:31:29.785969 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:31:29.786106 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:31:29.786175 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 12:31:29.786238 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 12:31:29.786404 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 16 12:31:29.786465 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 16 12:31:29.786474 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 16 12:31:29.786482 
kernel: PCI host bridge to bus 0000:00 Dec 16 12:31:29.786551 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 16 12:31:29.786609 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 12:31:29.786663 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 16 12:31:29.786713 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:31:29.786806 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:31:29.786875 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 12:31:29.786940 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 16 12:31:29.787001 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 16 12:31:29.787060 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 16 12:31:29.787119 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 12:31:29.787177 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 16 12:31:29.787236 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 16 12:31:29.787316 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 12:31:29.787384 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:31:29.787439 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 12:31:29.787449 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:31:29.787456 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:31:29.787463 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:31:29.787470 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:31:29.787477 kernel: iommu: Default domain type: Translated Dec 16 12:31:29.787484 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:31:29.787491 kernel: efivars: Registered efivars operations Dec 16 12:31:29.787500 kernel: vgaarb: loaded Dec 16 12:31:29.787507 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:31:29.787515 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:31:29.787522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:31:29.787530 kernel: pnp: PnP ACPI init Dec 16 12:31:29.787605 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 12:31:29.787615 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:31:29.787622 kernel: NET: Registered PF_INET protocol family Dec 16 12:31:29.787631 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:31:29.787639 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:31:29.787646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:31:29.787654 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:31:29.787661 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:31:29.787667 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:31:29.787675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:31:29.787682 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:31:29.787689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:31:29.787697 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:31:29.787704 
kernel: kvm [1]: HYP mode not available Dec 16 12:31:29.787711 kernel: Initialise system trusted keyrings Dec 16 12:31:29.787718 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:31:29.787725 kernel: Key type asymmetric registered Dec 16 12:31:29.787732 kernel: Asymmetric key parser 'x509' registered Dec 16 12:31:29.787739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:31:29.787746 kernel: io scheduler mq-deadline registered Dec 16 12:31:29.787753 kernel: io scheduler kyber registered Dec 16 12:31:29.787761 kernel: io scheduler bfq registered Dec 16 12:31:29.787768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:31:29.787775 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:31:29.787783 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:31:29.787842 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 16 12:31:29.787852 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:31:29.787858 kernel: thunder_xcv, ver 1.0 Dec 16 12:31:29.787865 kernel: thunder_bgx, ver 1.0 Dec 16 12:31:29.787872 kernel: nicpf, ver 1.0 Dec 16 12:31:29.787880 kernel: nicvf, ver 1.0 Dec 16 12:31:29.787945 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:31:29.788003 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:31:29 UTC (1765888289) Dec 16 12:31:29.788012 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:31:29.788019 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:31:29.788027 kernel: watchdog: NMI not fully supported Dec 16 12:31:29.788034 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:31:29.788041 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:31:29.788050 kernel: Segment Routing with IPv6 Dec 16 12:31:29.788057 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:31:29.788064 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:31:29.788071 kernel: Key type dns_resolver registered Dec 16 12:31:29.788077 kernel: registered taskstats version 1 Dec 16 12:31:29.788084 kernel: Loading compiled-in X.509 certificates Dec 16 12:31:29.788091 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 16 12:31:29.788098 kernel: Demotion targets for Node 0: null Dec 16 12:31:29.788105 kernel: Key type .fscrypt registered Dec 16 12:31:29.788113 kernel: Key type fscrypt-provisioning registered Dec 16 12:31:29.788120 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:31:29.788127 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:31:29.788134 kernel: ima: No architecture policies found Dec 16 12:31:29.788142 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:31:29.788149 kernel: clk: Disabling unused clocks Dec 16 12:31:29.788156 kernel: PM: genpd: Disabling unused power domains Dec 16 12:31:29.788163 kernel: Warning: unable to open an initial console. Dec 16 12:31:29.788170 kernel: Freeing unused kernel memory: 39552K Dec 16 12:31:29.788178 kernel: Run /init as init process Dec 16 12:31:29.788185 kernel: with arguments: Dec 16 12:31:29.788193 kernel: /init Dec 16 12:31:29.788199 kernel: with environment: Dec 16 12:31:29.788206 kernel: HOME=/ Dec 16 12:31:29.788213 kernel: TERM=linux Dec 16 12:31:29.788221 systemd[1]: Successfully made /usr/ read-only. 
Dec 16 12:31:29.788231 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:31:29.788241 systemd[1]: Detected virtualization kvm. Dec 16 12:31:29.788259 systemd[1]: Detected architecture arm64. Dec 16 12:31:29.788280 systemd[1]: Running in initrd. Dec 16 12:31:29.788288 systemd[1]: No hostname configured, using default hostname. Dec 16 12:31:29.788295 systemd[1]: Hostname set to . Dec 16 12:31:29.788303 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:31:29.788310 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:31:29.788318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:31:29.788327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:31:29.788335 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:31:29.788349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:31:29.788357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:31:29.788365 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:31:29.788374 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 12:31:29.788383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 12:31:29.788391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:31:29.788398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:31:29.788406 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:31:29.788413 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:31:29.788420 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:31:29.788428 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:31:29.788435 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:31:29.788443 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:31:29.788451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:31:29.788459 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:31:29.788466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:31:29.788474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:31:29.788481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:31:29.788489 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:31:29.788496 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:31:29.788504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:31:29.788513 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Dec 16 12:31:29.788521 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:31:29.788528 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:31:29.788536 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:31:29.788543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:31:29.788551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:31:29.788558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:31:29.788568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:31:29.788576 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:31:29.788603 systemd-journald[245]: Collecting audit messages is disabled. Dec 16 12:31:29.788623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:31:29.788632 systemd-journald[245]: Journal started Dec 16 12:31:29.788650 systemd-journald[245]: Runtime Journal (/run/log/journal/af66950e664442b39152028518af1687) is 6M, max 48.5M, 42.4M free. Dec 16 12:31:29.795319 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:31:29.782145 systemd-modules-load[247]: Inserted module 'overlay' Dec 16 12:31:29.798186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:31:29.799821 systemd-modules-load[247]: Inserted module 'br_netfilter' Dec 16 12:31:29.802130 kernel: Bridge firewalling registered Dec 16 12:31:29.802151 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:31:29.803469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:31:29.804776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:31:29.809332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:31:29.811113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:31:29.813121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:31:29.818902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:31:29.826449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:31:29.829650 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:31:29.829832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:31:29.833192 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:31:29.835999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:31:29.838301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:31:29.840316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 16 12:31:29.866302 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 16 12:31:29.880831 systemd-resolved[288]: Positive Trust Anchors: Dec 16 12:31:29.880849 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:31:29.880881 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:31:29.885868 systemd-resolved[288]: Defaulting to hostname 'linux'. Dec 16 12:31:29.887139 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:31:29.890402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:31:29.946275 kernel: SCSI subsystem initialized Dec 16 12:31:29.950260 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:31:29.958274 kernel: iscsi: registered transport (tcp) Dec 16 12:31:29.971487 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:31:29.971514 kernel: QLogic iSCSI HBA Driver Dec 16 12:31:29.989354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:31:30.007327 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:31:30.010528 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:31:30.056060 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:31:30.058004 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:31:30.126290 kernel: raid6: neonx8 gen() 15770 MB/s Dec 16 12:31:30.143269 kernel: raid6: neonx4 gen() 15771 MB/s Dec 16 12:31:30.160266 kernel: raid6: neonx2 gen() 13003 MB/s Dec 16 12:31:30.177266 kernel: raid6: neonx1 gen() 10439 MB/s Dec 16 12:31:30.194265 kernel: raid6: int64x8 gen() 6890 MB/s Dec 16 12:31:30.211265 kernel: raid6: int64x4 gen() 7343 MB/s Dec 16 12:31:30.228266 kernel: raid6: int64x2 gen() 6042 MB/s Dec 16 12:31:30.245392 kernel: raid6: int64x1 gen() 5046 MB/s Dec 16 12:31:30.245409 kernel: raid6: using algorithm neonx4 gen() 15771 MB/s Dec 16 12:31:30.263341 kernel: raid6: .... xor() 12351 MB/s, rmw enabled Dec 16 12:31:30.263356 kernel: raid6: using neon recovery algorithm Dec 16 12:31:30.268267 kernel: xor: measuring software checksum speed Dec 16 12:31:30.268285 kernel: 8regs : 19617 MB/sec Dec 16 12:31:30.269406 kernel: 32regs : 21676 MB/sec Dec 16 12:31:30.270627 kernel: arm64_neon : 28041 MB/sec Dec 16 12:31:30.270644 kernel: xor: using function: arm64_neon (28041 MB/sec) Dec 16 12:31:30.322280 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:31:30.329110 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:31:30.331790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:31:30.367452 systemd-udevd[503]: Using default interface naming scheme 'v255'. Dec 16 12:31:30.371559 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:31:30.374052 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:31:30.397742 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Dec 16 12:31:30.422584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:31:30.425814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:31:30.491257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:31:30.493572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:31:30.547980 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 16 12:31:30.548170 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 12:31:30.555444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:31:30.555497 kernel: GPT:9289727 != 19775487 Dec 16 12:31:30.555515 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:31:30.556541 kernel: GPT:9289727 != 19775487 Dec 16 12:31:30.556561 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:31:30.557671 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:31:30.557888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:31:30.558016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:31:30.561499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:31:30.571761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:31:30.600075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:31:30.601719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:31:30.603802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:31:30.618496 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:31:30.630957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:31:30.637299 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 12:31:30.638454 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:31:30.641452 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:31:30.643739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:31:30.645817 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:31:30.648643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:31:30.652125 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:31:30.669873 disk-uuid[595]: Primary Header is updated. Dec 16 12:31:30.669873 disk-uuid[595]: Secondary Entries is updated. Dec 16 12:31:30.669873 disk-uuid[595]: Secondary Header is updated. 
Dec 16 12:31:30.674295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:31:30.674516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:31:31.684271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:31:31.684663 disk-uuid[600]: The operation has completed successfully. Dec 16 12:31:31.712518 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:31:31.712620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:31:31.738230 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:31:31.760935 sh[615]: Success Dec 16 12:31:31.773739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:31:31.773795 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:31:31.774972 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:31:31.782287 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:31:31.812858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:31:31.814780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:31:31.831895 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:31:31.841280 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (627) Dec 16 12:31:31.844330 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:31:31.844362 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:31:31.850755 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:31:31.850948 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:31:31.851824 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:31:31.853064 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:31:31.854363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:31:31.855143 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:31:31.857726 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:31:31.882299 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (660) Dec 16 12:31:31.884749 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:31:31.884793 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:31:31.887720 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:31:31.887763 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:31:31.892326 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:31:31.893871 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:31:31.895982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:31:31.972144 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:31:31.975198 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 12:31:32.000917 ignition[705]: Ignition 2.22.0 Dec 16 12:31:32.000930 ignition[705]: Stage: fetch-offline Dec 16 12:31:32.001119 ignition[705]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:32.001128 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:32.001207 ignition[705]: parsed url from cmdline: "" Dec 16 12:31:32.001209 ignition[705]: no config URL provided Dec 16 12:31:32.001214 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:31:32.001220 ignition[705]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:31:32.001241 ignition[705]: op(1): [started] loading QEMU firmware config module Dec 16 12:31:32.001264 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:31:32.010117 ignition[705]: op(1): [finished] loading QEMU firmware config module Dec 16 12:31:32.010668 systemd-networkd[807]: lo: Link UP Dec 16 12:31:32.010672 systemd-networkd[807]: lo: Gained carrier Dec 16 12:31:32.011328 systemd-networkd[807]: Enumeration completed Dec 16 12:31:32.011427 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:31:32.011704 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:31:32.011708 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:31:32.012475 systemd-networkd[807]: eth0: Link UP Dec 16 12:31:32.012604 systemd-networkd[807]: eth0: Gained carrier Dec 16 12:31:32.012614 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:31:32.013385 systemd[1]: Reached target network.target - Network. Dec 16 12:31:32.031377 systemd-networkd[807]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:31:32.045470 ignition[705]: parsing config with SHA512: 83f7899a82ad2ea650016194e30da6aa69c3c9cdf9f90cefd8e71bd664cfeb1c5d6b19f5942b8b1531d8ac5b5e173af8d58f49f78132f7086dee943d35ad191d Dec 16 12:31:32.049997 unknown[705]: fetched base config from "system" Dec 16 12:31:32.050009 unknown[705]: fetched user config from "qemu" Dec 16 12:31:32.050395 ignition[705]: fetch-offline: fetch-offline passed Dec 16 12:31:32.050451 ignition[705]: Ignition finished successfully Dec 16 12:31:32.053827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:31:32.055476 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 12:31:32.056225 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:31:32.102700 ignition[815]: Ignition 2.22.0 Dec 16 12:31:32.102715 ignition[815]: Stage: kargs Dec 16 12:31:32.104027 ignition[815]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:32.104043 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:32.105689 ignition[815]: kargs: kargs passed Dec 16 12:31:32.105736 ignition[815]: Ignition finished successfully Dec 16 12:31:32.110644 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:31:32.112631 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 12:31:32.141687 ignition[823]: Ignition 2.22.0 Dec 16 12:31:32.141705 ignition[823]: Stage: disks Dec 16 12:31:32.141832 ignition[823]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:32.144982 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:31:32.141840 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:32.146277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:31:32.142587 ignition[823]: disks: disks passed Dec 16 12:31:32.147947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:31:32.142632 ignition[823]: Ignition finished successfully Dec 16 12:31:32.150057 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:31:32.151832 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:31:32.153230 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:31:32.156055 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:31:32.186373 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 12:31:32.191060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:31:32.193614 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:31:32.257269 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:31:32.257572 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:31:32.258832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:31:32.261181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:31:32.262927 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:31:32.263940 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:31:32.263979 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:31:32.264017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:31:32.272675 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:31:32.275153 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:31:32.278303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841) Dec 16 12:31:32.280420 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:31:32.280447 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:31:32.285269 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:31:32.285294 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:31:32.287950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:31:32.313900 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:31:32.318122 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:31:32.322278 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:31:32.326073 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:31:32.401935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 16 12:31:32.403841 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:31:32.406199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:31:32.421457 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:31:32.433904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:31:32.441233 ignition[953]: INFO : Ignition 2.22.0 Dec 16 12:31:32.441233 ignition[953]: INFO : Stage: mount Dec 16 12:31:32.443386 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:32.443386 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:32.443386 ignition[953]: INFO : mount: mount passed Dec 16 12:31:32.443386 ignition[953]: INFO : Ignition finished successfully Dec 16 12:31:32.447177 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:31:32.449032 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:31:32.840229 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:31:32.841654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:31:32.860263 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Dec 16 12:31:32.862359 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:31:32.862372 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:31:32.864273 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:31:32.864303 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:31:32.866149 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:31:32.897169 ignition[985]: INFO : Ignition 2.22.0 Dec 16 12:31:32.897169 ignition[985]: INFO : Stage: files Dec 16 12:31:32.898854 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:32.898854 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:32.898854 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:31:32.902576 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:31:32.902576 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:31:32.902576 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:31:32.902576 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:31:32.902576 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:31:32.902576 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:31:32.902576 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:31:32.900818 unknown[985]: wrote ssh authorized keys file for user: core Dec 16 12:31:32.943453 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:31:33.059819 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:31:33.059819 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:31:33.065737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:31:33.320389 systemd-networkd[807]: eth0: Gained IPv6LL Dec 16 12:31:33.349721 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:31:33.547145 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:31:33.547145 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:31:33.550990 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:31:33.566252 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:31:33.569722 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:31:33.572664 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:31:33.572664 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:31:33.572664 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:31:33.572664 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:31:33.572664 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:31:33.572664 ignition[985]: INFO : files: files passed Dec 16 12:31:33.572664 ignition[985]: INFO : Ignition finished successfully Dec 16 12:31:33.574727 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:31:33.576448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:31:33.578488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:31:33.587623 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:31:33.590045 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:31:33.587735 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:31:33.592343 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:31:33.592343 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:31:33.597998 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:31:33.593505 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:31:33.595589 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:31:33.597396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:31:33.640308 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:31:33.641294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:31:33.643728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:31:33.645532 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:31:33.646642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:31:33.647410 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:31:33.669952 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 16 12:31:33.674421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:31:33.697699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:31:33.698930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:31:33.700926 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:31:33.702613 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:31:33.702734 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:31:33.705113 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:31:33.706238 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:31:33.708140 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:31:33.709895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:31:33.711531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:31:33.713341 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:31:33.715375 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:31:33.717231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:31:33.719373 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:31:33.721186 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:31:33.723317 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:31:33.724979 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:31:33.725102 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:31:33.727529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:31:33.729398 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:31:33.731279 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:31:33.732395 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:31:33.734437 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:31:33.734711 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:31:33.738277 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:31:33.738497 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:31:33.740374 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:31:33.741962 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:31:33.745385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:31:33.747275 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:31:33.749503 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:31:33.751024 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:31:33.751108 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:31:33.752601 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:31:33.752676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:31:33.754165 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 16 12:31:33.754299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:31:33.755998 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:31:33.756095 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:31:33.758347 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:31:33.760970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:31:33.762055 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:31:33.762168 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:31:33.764075 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:31:33.764172 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:31:33.769187 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:31:33.772407 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:31:33.780742 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:31:33.789497 ignition[1040]: INFO : Ignition 2.22.0 Dec 16 12:31:33.789497 ignition[1040]: INFO : Stage: umount Dec 16 12:31:33.791088 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:31:33.791088 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:31:33.791088 ignition[1040]: INFO : umount: umount passed Dec 16 12:31:33.791088 ignition[1040]: INFO : Ignition finished successfully Dec 16 12:31:33.793979 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:31:33.794076 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:31:33.795412 systemd[1]: Stopped target network.target - Network. Dec 16 12:31:33.796769 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:31:33.796835 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:31:33.798564 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:31:33.798607 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:31:33.800116 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:31:33.800162 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:31:33.801814 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:31:33.801855 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:31:33.803471 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:31:33.805226 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:31:33.809755 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:31:33.809873 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:31:33.812856 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:31:33.813103 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:31:33.813141 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:31:33.817936 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:31:33.820780 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:31:33.820879 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 12:31:33.824309 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:31:33.826575 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:31:33.826611 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:31:33.829994 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:31:33.833944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:31:33.834017 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:31:33.837791 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:31:33.837841 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:31:33.841146 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:31:33.841193 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:31:33.843123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:31:33.846189 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:31:33.846240 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:31:33.846535 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:31:33.846615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:31:33.848784 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:31:33.848861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:31:33.864940 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:31:33.865083 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:31:33.867474 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:31:33.867572 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:31:33.869900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:31:33.869964 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:31:33.872283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:31:33.872314 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:31:33.874030 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:31:33.874074 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:31:33.877061 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:31:33.877109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:31:33.879774 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:31:33.879825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:31:33.883153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:31:33.885112 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:31:33.885169 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:31:33.888225 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:31:33.888315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 16 12:31:33.891260 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:31:33.891304 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:31:33.894062 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:31:33.894108 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:31:33.896362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:31:33.896411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:31:33.900834 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 12:31:33.900882 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 12:31:33.900909 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 12:31:33.900936 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:31:33.901451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:31:33.901533 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:31:33.903347 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:31:33.905629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:31:33.920049 systemd[1]: Switching root. Dec 16 12:31:33.973784 systemd-journald[245]: Journal stopped Dec 16 12:31:34.805883 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Dec 16 12:31:34.805934 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:31:34.805950 kernel: SELinux: policy capability open_perms=1 Dec 16 12:31:34.805966 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:31:34.805976 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:31:34.805989 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:31:34.805998 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:31:34.806009 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:31:34.806019 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:31:34.806028 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:31:34.806038 kernel: audit: type=1403 audit(1765888294.154:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:31:34.806052 systemd[1]: Successfully loaded SELinux policy in 65.536ms. Dec 16 12:31:34.806069 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.305ms. Dec 16 12:31:34.806082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:31:34.806093 systemd[1]: Detected virtualization kvm. Dec 16 12:31:34.806103 systemd[1]: Detected architecture arm64. Dec 16 12:31:34.806113 systemd[1]: Detected first boot. Dec 16 12:31:34.806123 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:31:34.806133 zram_generator::config[1087]: No configuration found. 
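[Editor's note] The kernel lines above report the SELinux policy capabilities compiled into the loaded policy (network_peer_controls=1, open_perms=1, and so on). A minimal sketch, assuming selinuxfs is mounted at its conventional location /sys/fs/selinux, that reads the same flags back from userspace:

import pathlib

# Report the SELinux policy capabilities and enforcing state via selinuxfs.
# Prints nothing if selinuxfs is not mounted at /sys/fs/selinux.
selinuxfs = pathlib.Path("/sys/fs/selinux")
caps_dir = selinuxfs / "policy_capabilities"
if caps_dir.is_dir():
    for cap in sorted(caps_dir.iterdir()):
        print(f"SELinux: policy capability {cap.name}={cap.read_text().strip()}")
    enforce = (selinuxfs / "enforce").read_text().strip()
    print("mode:", "enforcing" if enforce == "1" else "permissive")
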
Dec 16 12:31:34.806143 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:31:34.806153 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:31:34.806165 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:31:34.806176 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:31:34.806187 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:31:34.806198 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:31:34.806208 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:31:34.806218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:31:34.806228 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:31:34.806238 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:31:34.806275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:31:34.806289 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:31:34.806299 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:31:34.806309 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:31:34.806327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:31:34.806339 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:31:34.806350 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:31:34.806360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:31:34.806370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:31:34.806382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:31:34.806393 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:31:34.806403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:31:34.806414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:31:34.806424 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:31:34.806435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:31:34.806445 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:31:34.806455 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:31:34.806467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:31:34.806477 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:31:34.806488 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:31:34.806499 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:31:34.806510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:31:34.806520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:31:34.806530 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:31:34.806540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
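[Editor's note] "Populated /etc with preset unit settings" above is systemd applying preset policy on first boot; the enable/disable decisions recorded in the Ignition stage earlier are normally carried by a preset file. The exact path and contents Ignition wrote are not shown in this log, so the sketch below only prints an illustrative preset under an assumed, hypothetical filename.

# Illustrative only: a systemd preset is a plain-text list of enable/disable
# rules consulted when presets are applied (e.g. `systemctl preset-all`).
preset = """\
enable prepare-helm.service
disable coreos-metadata.service
"""

# Assumed path for the sketch; any *.preset file under
# /etc/systemd/system-preset/ would be consulted.
path = "/etc/systemd/system-preset/20-ignition.preset"
print(f"would write {path}:\n{preset}")
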
Dec 16 12:31:34.806551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:31:34.806562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:31:34.806573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:31:34.806585 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:31:34.806596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:31:34.806605 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:31:34.806615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:31:34.806626 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:31:34.806636 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:31:34.806647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:31:34.806658 systemd[1]: Reached target machines.target - Containers. Dec 16 12:31:34.806668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:31:34.806679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:31:34.806689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:31:34.806699 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:31:34.806710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:31:34.806720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:31:34.806730 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:31:34.806741 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:31:34.806751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:31:34.806761 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:31:34.806771 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:31:34.806781 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:31:34.806791 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:31:34.806802 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:31:34.806812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:31:34.806823 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:31:34.806834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:31:34.806844 kernel: loop: module loaded Dec 16 12:31:34.806854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:31:34.806864 kernel: ACPI: bus type drm_connector registered Dec 16 12:31:34.806873 kernel: fuse: init (API version 7.41) Dec 16 12:31:34.806883 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 16 12:31:34.806895 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:31:34.806905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:31:34.806917 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:31:34.806928 systemd[1]: Stopped verity-setup.service. Dec 16 12:31:34.806938 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:31:34.806949 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:31:34.806959 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:31:34.806971 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:31:34.806981 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:31:34.806991 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:31:34.807023 systemd-journald[1162]: Collecting audit messages is disabled. Dec 16 12:31:34.807047 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:31:34.807059 systemd-journald[1162]: Journal started Dec 16 12:31:34.807080 systemd-journald[1162]: Runtime Journal (/run/log/journal/af66950e664442b39152028518af1687) is 6M, max 48.5M, 42.4M free. Dec 16 12:31:34.549736 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:31:34.571888 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:31:34.572332 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:31:34.809288 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:31:34.810039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:31:34.811628 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:31:34.811794 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:31:34.813413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:31:34.813609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:31:34.815010 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:31:34.815170 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:31:34.816591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:31:34.816766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:31:34.818399 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:31:34.818586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:31:34.819947 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:31:34.820117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:31:34.823286 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:31:34.824672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:31:34.826376 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:31:34.827959 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:31:34.840632 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:31:34.843152 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
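[Editor's note] The modprobe@*.service instances finishing above (configfs, dm_mod, drm, efi_pstore, fuse, loop) should leave each module visible to userspace. A minimal sketch that checks /proc/modules for loadable modules and falls back to /sys/module as a rough heuristic for built-ins; the module list is taken from the unit names in this log.

import pathlib

modules = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]
# First column of each /proc/modules line is the module name.
loaded = {line.split()[0]
          for line in pathlib.Path("/proc/modules").read_text().splitlines()}
for name in modules:
    # /sys/module/<name> also exists for many built-in modules, though not
    # all, so "missing" here is only a hint, not a definitive answer.
    state = "loaded" if name in loaded else (
        "built-in/present" if pathlib.Path("/sys/module", name).exists()
        else "missing")
    print(f"{name}: {state}")
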
Dec 16 12:31:34.845373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:31:34.846518 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:31:34.846561 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:31:34.848525 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:31:34.855164 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:31:34.856424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:31:34.857564 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:31:34.859556 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:31:34.860815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:31:34.861810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:31:34.863064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:31:34.864007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:31:34.869396 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:31:34.871694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:31:34.874210 systemd-journald[1162]: Time spent on flushing to /var/log/journal/af66950e664442b39152028518af1687 is 13.021ms for 886 entries. Dec 16 12:31:34.874210 systemd-journald[1162]: System Journal (/var/log/journal/af66950e664442b39152028518af1687) is 8M, max 195.6M, 187.6M free. Dec 16 12:31:34.893474 systemd-journald[1162]: Received client request to flush runtime journal. Dec 16 12:31:34.893520 kernel: loop0: detected capacity change from 0 to 100632 Dec 16 12:31:34.874528 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:31:34.876946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:31:34.878490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:31:34.880167 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:31:34.887500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:31:34.890337 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:31:34.896280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:31:34.910259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:31:34.910884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:31:34.914289 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 16 12:31:34.914330 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Dec 16 12:31:34.918826 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:31:34.923409 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 16 12:31:34.936440 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:31:34.942291 kernel: loop1: detected capacity change from 0 to 200800 Dec 16 12:31:34.952749 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:31:34.957479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:31:34.972276 kernel: loop2: detected capacity change from 0 to 119840 Dec 16 12:31:34.981653 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 12:31:34.981669 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Dec 16 12:31:34.985036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:31:34.995270 kernel: loop3: detected capacity change from 0 to 100632 Dec 16 12:31:35.000273 kernel: loop4: detected capacity change from 0 to 200800 Dec 16 12:31:35.007277 kernel: loop5: detected capacity change from 0 to 119840 Dec 16 12:31:35.012549 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:31:35.012941 (sd-merge)[1228]: Merged extensions into '/usr'. Dec 16 12:31:35.016358 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:31:35.016374 systemd[1]: Reloading... Dec 16 12:31:35.074272 zram_generator::config[1257]: No configuration found. Dec 16 12:31:35.169989 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:31:35.209694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:31:35.209977 systemd[1]: Reloading finished in 193 ms. Dec 16 12:31:35.240269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:31:35.241659 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:31:35.253571 systemd[1]: Starting ensure-sysext.service... Dec 16 12:31:35.255371 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:31:35.264122 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:31:35.264137 systemd[1]: Reloading... Dec 16 12:31:35.269960 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:31:35.269992 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:31:35.270788 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:31:35.271053 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:31:35.271696 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:31:35.271904 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 12:31:35.271951 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Dec 16 12:31:35.276421 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:31:35.276435 systemd-tmpfiles[1289]: Skipping /boot Dec 16 12:31:35.282002 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 16 12:31:35.282018 systemd-tmpfiles[1289]: Skipping /boot Dec 16 12:31:35.312325 zram_generator::config[1319]: No configuration found. Dec 16 12:31:35.437414 systemd[1]: Reloading finished in 172 ms. Dec 16 12:31:35.457930 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:31:35.464311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:31:35.469833 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:31:35.472204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:31:35.474324 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:31:35.490419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:31:35.493020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:31:35.496479 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:31:35.503295 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:31:35.505892 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:31:35.508734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:31:35.514786 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:31:35.517446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:31:35.520915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:31:35.522234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:31:35.522374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:31:35.523570 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:31:35.526666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:31:35.526844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:31:35.535479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:31:35.539308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:31:35.540845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:31:35.541101 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:31:35.544263 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:31:35.547108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:31:35.548865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:31:35.549013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
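[Editor's note] The (sd-merge) lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions into /usr, followed by the daemon reloads. A minimal sketch of where such extension images are typically discovered; the directory list is the commonly documented sysext search path (an assumption, the log does not enumerate it), and the kubernetes image here arrived via the /etc/extensions symlink written by Ignition earlier in this log.

import pathlib

search_dirs = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
for d in search_dirs:
    p = pathlib.Path(d)
    if not p.is_dir():
        continue
    for image in sorted(p.iterdir()):
        # Extension images are raw disk images (*.raw) or plain directory trees.
        if image.suffix == ".raw" or image.is_dir():
            print(f"{d}: candidate extension {image.name}")
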
Dec 16 12:31:35.551077 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:31:35.553102 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:31:35.553301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:31:35.554436 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Dec 16 12:31:35.555767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:31:35.555945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:31:35.565548 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:31:35.568387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:31:35.569632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:31:35.578233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:31:35.581508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:31:35.586698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:31:35.587759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:31:35.587875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:31:35.587983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:31:35.592174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:31:35.592391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:31:35.595255 augenrules[1400]: No rules Dec 16 12:31:35.594974 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:31:35.595329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:31:35.597662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:31:35.599669 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:31:35.599886 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:31:35.602161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:31:35.602390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:31:35.604067 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:31:35.604275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:31:35.611276 systemd[1]: Finished ensure-sysext.service. Dec 16 12:31:35.624114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:31:35.625557 systemd-resolved[1355]: Positive Trust Anchors: Dec 16 12:31:35.625834 systemd-resolved[1355]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:31:35.625914 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:31:35.626427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:31:35.626501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:31:35.628484 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:31:35.632501 systemd-resolved[1355]: Defaulting to hostname 'linux'. Dec 16 12:31:35.633968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:31:35.635156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:31:35.659007 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:31:35.718334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:31:35.720938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:31:35.752749 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:31:35.780712 systemd-networkd[1437]: lo: Link UP Dec 16 12:31:35.780721 systemd-networkd[1437]: lo: Gained carrier Dec 16 12:31:35.781573 systemd-networkd[1437]: Enumeration completed Dec 16 12:31:35.781679 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:31:35.781998 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:31:35.782002 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:31:35.782878 systemd[1]: Reached target network.target - Network. Dec 16 12:31:35.783441 systemd-networkd[1437]: eth0: Link UP Dec 16 12:31:35.783544 systemd-networkd[1437]: eth0: Gained carrier Dec 16 12:31:35.783565 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:31:35.785530 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:31:35.789451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:31:35.794614 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:31:35.796048 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:31:35.797257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:31:35.798413 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
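[Editor's note] The positive trust anchor that systemd-resolved logs above is the DNS root's DS record. A minimal sketch that splits it into its fields, using the value exactly as logged; the algorithm and digest-type names follow the IANA DNSSEC registries (8 = RSASHA256, 2 = SHA-256).

# Parse the root trust anchor logged by systemd-resolved into DS-record fields.
anchor = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = anchor.split()
print(f"owner={owner} key_tag={key_tag} "
      f"algorithm={'RSASHA256' if algorithm == '8' else algorithm} "
      f"digest_type={'SHA-256' if digest_type == '2' else digest_type} "
      f"digest={digest[:16]}...")
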
Dec 16 12:31:35.799368 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:31:35.799632 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:31:35.800618 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Dec 16 12:31:35.800893 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:31:35.800917 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:31:35.802118 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:31:35.802282 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:31:35.802953 systemd-timesyncd[1439]: Initial clock synchronization to Tue 2025-12-16 12:31:35.933283 UTC. Dec 16 12:31:35.803383 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:31:35.804482 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:31:35.805703 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:31:35.807634 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:31:35.810061 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:31:35.812589 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:31:35.813996 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:31:35.815283 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:31:35.818226 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:31:35.819649 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:31:35.821940 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:31:35.823538 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:31:35.833163 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:31:35.834368 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:31:35.835346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:31:35.835382 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:31:35.836529 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:31:35.838660 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:31:35.840573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:31:35.845973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:31:35.847969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:31:35.849069 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:31:35.850004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:31:35.852050 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:31:35.854994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
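[Editor's note] Above, eth0 is matched against /usr/lib/systemd/network/zz-default.network and then acquires 10.0.0.67/16 over DHCP. That file's contents are not reproduced in the journal, so the sketch below only prints an illustrative minimal equivalent of a catch-all DHCP .network unit; it is not Flatcar's actual shipped file.

# Illustrative catch-all DHCP .network configuration, printed rather than
# installed; a real override would go under /etc/systemd/network/.
zz_default = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

print(zz_default)
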
Dec 16 12:31:35.855834 jq[1475]: false Dec 16 12:31:35.857429 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:31:35.861707 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:31:35.864847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:31:35.868361 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:31:35.868846 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:31:35.869501 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:31:35.870377 extend-filesystems[1476]: Found /dev/vda6 Dec 16 12:31:35.871994 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:31:35.876502 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:31:35.878156 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:31:35.879139 extend-filesystems[1476]: Found /dev/vda9 Dec 16 12:31:35.881802 jq[1495]: true Dec 16 12:31:35.882109 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:31:35.882601 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:31:35.882757 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:31:35.885779 extend-filesystems[1476]: Checking size of /dev/vda9 Dec 16 12:31:35.885903 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:31:35.886112 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:31:35.902800 update_engine[1492]: I20251216 12:31:35.902534 1492 main.cc:92] Flatcar Update Engine starting Dec 16 12:31:35.903596 tar[1500]: linux-arm64/LICENSE Dec 16 12:31:35.903771 tar[1500]: linux-arm64/helm Dec 16 12:31:35.904620 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:31:35.916704 jq[1502]: true Dec 16 12:31:35.921404 dbus-daemon[1473]: [system] SELinux support is enabled Dec 16 12:31:35.921591 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:31:35.925211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:31:35.925237 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:31:35.929250 extend-filesystems[1476]: Resized partition /dev/vda9 Dec 16 12:31:35.934038 update_engine[1492]: I20251216 12:31:35.929090 1492 update_check_scheduler.cc:74] Next update check in 2m29s Dec 16 12:31:35.926989 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:31:35.934123 extend-filesystems[1519]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:31:35.927005 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:31:35.929003 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:31:35.932182 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 12:31:35.942584 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:31:35.944589 systemd-logind[1485]: New seat seat0. Dec 16 12:31:35.946448 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:31:35.954263 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:31:36.036331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:31:36.053681 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:31:36.064183 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:31:36.067903 extend-filesystems[1519]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:31:36.067903 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:31:36.067903 extend-filesystems[1519]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:31:36.075649 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Dec 16 12:31:36.070872 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:31:36.076736 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:31:36.071077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:31:36.078733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:31:36.080880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:31:36.128035 containerd[1503]: time="2025-12-16T12:31:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:31:36.128796 containerd[1503]: time="2025-12-16T12:31:36.128611795Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:31:36.140120 containerd[1503]: time="2025-12-16T12:31:36.140081112Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.92µs" Dec 16 12:31:36.140120 containerd[1503]: time="2025-12-16T12:31:36.140114653Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:31:36.140213 containerd[1503]: time="2025-12-16T12:31:36.140140226Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:31:36.140332 containerd[1503]: time="2025-12-16T12:31:36.140309273Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:31:36.140332 containerd[1503]: time="2025-12-16T12:31:36.140331268Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:31:36.140387 containerd[1503]: time="2025-12-16T12:31:36.140357897Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140425 containerd[1503]: time="2025-12-16T12:31:36.140407498Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140453 containerd[1503]: time="2025-12-16T12:31:36.140422459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140683 
containerd[1503]: time="2025-12-16T12:31:36.140657735Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140683 containerd[1503]: time="2025-12-16T12:31:36.140681031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140735 containerd[1503]: time="2025-12-16T12:31:36.140693024Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140735 containerd[1503]: time="2025-12-16T12:31:36.140700993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140785 containerd[1503]: time="2025-12-16T12:31:36.140769579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:31:36.140977 containerd[1503]: time="2025-12-16T12:31:36.140958588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:31:36.141003 containerd[1503]: time="2025-12-16T12:31:36.140989894Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:31:36.141003 containerd[1503]: time="2025-12-16T12:31:36.141000220Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:31:36.141046 containerd[1503]: time="2025-12-16T12:31:36.141032989Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:31:36.141383 containerd[1503]: time="2025-12-16T12:31:36.141363155Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:31:36.141465 containerd[1503]: time="2025-12-16T12:31:36.141429831Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:31:36.144841 containerd[1503]: time="2025-12-16T12:31:36.144801954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:31:36.144904 containerd[1503]: time="2025-12-16T12:31:36.144869972Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:31:36.144904 containerd[1503]: time="2025-12-16T12:31:36.144885136Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:31:36.144904 containerd[1503]: time="2025-12-16T12:31:36.144896723Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144909001Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144919490Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144933232Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 
containerd[1503]: time="2025-12-16T12:31:36.144945673Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144957829Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144968034Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144976937Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:31:36.145056 containerd[1503]: time="2025-12-16T12:31:36.144992183Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145108581Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145128828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145142610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145155132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145165825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145177168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145188592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145199732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145211766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145222418Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:31:36.145267 containerd[1503]: time="2025-12-16T12:31:36.145232988Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:31:36.145725 containerd[1503]: time="2025-12-16T12:31:36.145430047Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:31:36.145725 containerd[1503]: time="2025-12-16T12:31:36.145447204Z" level=info msg="Start snapshots syncer" Dec 16 12:31:36.145725 containerd[1503]: time="2025-12-16T12:31:36.145473143Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:31:36.146174 containerd[1503]: time="2025-12-16T12:31:36.146103635Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:31:36.146315 containerd[1503]: time="2025-12-16T12:31:36.146173929Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:31:36.146474 containerd[1503]: time="2025-12-16T12:31:36.146349359Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:31:36.146512 containerd[1503]: time="2025-12-16T12:31:36.146483320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:31:36.146531 containerd[1503]: time="2025-12-16T12:31:36.146512308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:31:36.146531 containerd[1503]: time="2025-12-16T12:31:36.146527635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147612375Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147682669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147717186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147736010Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147773779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:31:36.147855 containerd[1503]: 
time="2025-12-16T12:31:36.147791709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:31:36.147855 containerd[1503]: time="2025-12-16T12:31:36.147809516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147865702Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147889486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147904081Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147918230Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147926646Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147939737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:31:36.148022 containerd[1503]: time="2025-12-16T12:31:36.147955227Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:31:36.148137 containerd[1503]: time="2025-12-16T12:31:36.148041946Z" level=info msg="runtime interface created" Dec 16 12:31:36.148137 containerd[1503]: time="2025-12-16T12:31:36.148051378Z" level=info msg="created NRI interface" Dec 16 12:31:36.148137 containerd[1503]: time="2025-12-16T12:31:36.148061095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:31:36.148137 containerd[1503]: time="2025-12-16T12:31:36.148078577Z" level=info msg="Connect containerd service" Dec 16 12:31:36.148137 containerd[1503]: time="2025-12-16T12:31:36.148107727Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:31:36.150574 containerd[1503]: time="2025-12-16T12:31:36.150502767Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:31:36.223634 containerd[1503]: time="2025-12-16T12:31:36.223511633Z" level=info msg="Start subscribing containerd event" Dec 16 12:31:36.223634 containerd[1503]: time="2025-12-16T12:31:36.223599734Z" level=info msg="Start recovering state" Dec 16 12:31:36.223750 containerd[1503]: time="2025-12-16T12:31:36.223697593Z" level=info msg="Start event monitor" Dec 16 12:31:36.223750 containerd[1503]: time="2025-12-16T12:31:36.223712473Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:31:36.223750 containerd[1503]: time="2025-12-16T12:31:36.223719100Z" level=info msg="Start streaming server" Dec 16 12:31:36.223750 containerd[1503]: time="2025-12-16T12:31:36.223727800Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:31:36.223750 containerd[1503]: 
time="2025-12-16T12:31:36.223734915Z" level=info msg="runtime interface starting up..." Dec 16 12:31:36.223750 containerd[1503]: time="2025-12-16T12:31:36.223740322Z" level=info msg="starting plugins..." Dec 16 12:31:36.223869 containerd[1503]: time="2025-12-16T12:31:36.223754186Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:31:36.225710 containerd[1503]: time="2025-12-16T12:31:36.224039103Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:31:36.225710 containerd[1503]: time="2025-12-16T12:31:36.224085207Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:31:36.225710 containerd[1503]: time="2025-12-16T12:31:36.224136352Z" level=info msg="containerd successfully booted in 0.096588s" Dec 16 12:31:36.224238 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:31:36.252208 tar[1500]: linux-arm64/README.md Dec 16 12:31:36.271328 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:31:37.098571 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:31:37.121319 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:31:37.124432 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:31:37.149000 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:31:37.149213 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:31:37.154190 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:31:37.182781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:31:37.185713 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:31:37.187970 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:31:37.189386 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:31:37.736453 systemd-networkd[1437]: eth0: Gained IPv6LL Dec 16 12:31:37.741892 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:31:37.743834 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:31:37.746480 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:31:37.749225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:31:37.759298 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:31:37.780461 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:31:37.782591 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:31:37.782810 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:31:37.785180 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:31:38.290459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:31:38.291998 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:31:38.293842 systemd[1]: Startup finished in 2.095s (kernel) + 4.527s (initrd) + 4.205s (userspace) = 10.828s. 
Dec 16 12:31:38.313718 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:31:38.645809 kubelet[1611]: E1216 12:31:38.645708 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:31:38.648442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:31:38.648576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:31:38.648950 systemd[1]: kubelet.service: Consumed 685ms CPU time, 249.5M memory peak. Dec 16 12:31:42.677759 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:31:42.678850 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:43906.service - OpenSSH per-connection server daemon (10.0.0.1:43906). Dec 16 12:31:42.740907 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 43906 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:31:42.742840 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:42.754454 systemd-logind[1485]: New session 1 of user core. Dec 16 12:31:42.755392 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:31:42.756442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:31:42.785016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:31:42.787085 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:31:42.812529 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:31:42.814917 systemd-logind[1485]: New session c1 of user core. Dec 16 12:31:42.935184 systemd[1630]: Queued start job for default target default.target. Dec 16 12:31:42.954226 systemd[1630]: Created slice app.slice - User Application Slice. Dec 16 12:31:42.954283 systemd[1630]: Reached target paths.target - Paths. Dec 16 12:31:42.954323 systemd[1630]: Reached target timers.target - Timers. Dec 16 12:31:42.955533 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:31:42.965513 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:31:42.965578 systemd[1630]: Reached target sockets.target - Sockets. Dec 16 12:31:42.965618 systemd[1630]: Reached target basic.target - Basic System. Dec 16 12:31:42.965647 systemd[1630]: Reached target default.target - Main User Target. Dec 16 12:31:42.965671 systemd[1630]: Startup finished in 144ms. Dec 16 12:31:42.965773 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:31:42.967141 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:31:43.028425 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:43922.service - OpenSSH per-connection server daemon (10.0.0.1:43922). Dec 16 12:31:43.094004 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 43922 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:31:43.095305 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.099619 systemd-logind[1485]: New session 2 of user core. 
Dec 16 12:31:43.111442 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:31:43.166162 sshd[1644]: Connection closed by 10.0.0.1 port 43922 Dec 16 12:31:43.166010 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:43.183232 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:43922.service: Deactivated successfully. Dec 16 12:31:43.184769 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:31:43.187446 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:31:43.188710 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:43930.service - OpenSSH per-connection server daemon (10.0.0.1:43930). Dec 16 12:31:43.189539 systemd-logind[1485]: Removed session 2. Dec 16 12:31:43.241269 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 43930 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:31:43.242554 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.247192 systemd-logind[1485]: New session 3 of user core. Dec 16 12:31:43.266497 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:31:43.314005 sshd[1653]: Connection closed by 10.0.0.1 port 43930 Dec 16 12:31:43.314416 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:43.323490 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:43930.service: Deactivated successfully. Dec 16 12:31:43.325190 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:31:43.326329 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:31:43.329883 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:43938.service - OpenSSH per-connection server daemon (10.0.0.1:43938). Dec 16 12:31:43.330836 systemd-logind[1485]: Removed session 3. Dec 16 12:31:43.387817 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 43938 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:31:43.389284 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.394470 systemd-logind[1485]: New session 4 of user core. Dec 16 12:31:43.409481 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:31:43.462183 sshd[1664]: Connection closed by 10.0.0.1 port 43938 Dec 16 12:31:43.461846 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:43.474219 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:43938.service: Deactivated successfully. Dec 16 12:31:43.477502 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:31:43.478202 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:31:43.480364 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:43948.service - OpenSSH per-connection server daemon (10.0.0.1:43948). Dec 16 12:31:43.480779 systemd-logind[1485]: Removed session 4. Dec 16 12:31:43.525442 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 43948 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:31:43.526657 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.532336 systemd-logind[1485]: New session 5 of user core. Dec 16 12:31:43.546477 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:31:43.605874 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:31:43.606145 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:31:43.910510 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:31:43.923627 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:31:44.160970 dockerd[1694]: time="2025-12-16T12:31:44.160825612Z" level=info msg="Starting up" Dec 16 12:31:44.162489 dockerd[1694]: time="2025-12-16T12:31:44.162453895Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:31:44.175336 dockerd[1694]: time="2025-12-16T12:31:44.175279957Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:31:44.337558 dockerd[1694]: time="2025-12-16T12:31:44.337496954Z" level=info msg="Loading containers: start." Dec 16 12:31:44.345287 kernel: Initializing XFRM netlink socket Dec 16 12:31:44.563399 systemd-networkd[1437]: docker0: Link UP Dec 16 12:31:44.573215 dockerd[1694]: time="2025-12-16T12:31:44.573161827Z" level=info msg="Loading containers: done." Dec 16 12:31:44.585695 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1118997585-merged.mount: Deactivated successfully. Dec 16 12:31:44.591649 dockerd[1694]: time="2025-12-16T12:31:44.591589032Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:31:44.591760 dockerd[1694]: time="2025-12-16T12:31:44.591689394Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:31:44.591803 dockerd[1694]: time="2025-12-16T12:31:44.591781228Z" level=info msg="Initializing buildkit" Dec 16 12:31:44.619444 dockerd[1694]: time="2025-12-16T12:31:44.619385290Z" level=info msg="Completed buildkit initialization" Dec 16 12:31:44.626817 dockerd[1694]: time="2025-12-16T12:31:44.626768899Z" level=info msg="Daemon has completed initialization" Dec 16 12:31:44.627063 dockerd[1694]: time="2025-12-16T12:31:44.626936116Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:31:44.627185 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:31:45.092974 containerd[1503]: time="2025-12-16T12:31:45.092932302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:31:45.708984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063911424.mount: Deactivated successfully. 
Dec 16 12:31:46.553265 containerd[1503]: time="2025-12-16T12:31:46.553203892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:46.554296 containerd[1503]: time="2025-12-16T12:31:46.554272682Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042" Dec 16 12:31:46.554680 containerd[1503]: time="2025-12-16T12:31:46.554656652Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:46.557226 containerd[1503]: time="2025-12-16T12:31:46.557168481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:46.558388 containerd[1503]: time="2025-12-16T12:31:46.558126315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.465149722s" Dec 16 12:31:46.558388 containerd[1503]: time="2025-12-16T12:31:46.558163394Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 12:31:46.558739 containerd[1503]: time="2025-12-16T12:31:46.558664386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:31:47.538211 containerd[1503]: time="2025-12-16T12:31:47.537267827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:47.538211 containerd[1503]: time="2025-12-16T12:31:47.537887316Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479" Dec 16 12:31:47.538932 containerd[1503]: time="2025-12-16T12:31:47.538896793Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:47.541976 containerd[1503]: time="2025-12-16T12:31:47.541934817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:47.543003 containerd[1503]: time="2025-12-16T12:31:47.542968102Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 984.272025ms" Dec 16 12:31:47.543003 containerd[1503]: time="2025-12-16T12:31:47.543000504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 12:31:47.543510 
containerd[1503]: time="2025-12-16T12:31:47.543477779Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:31:48.403209 containerd[1503]: time="2025-12-16T12:31:48.403164819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:48.405137 containerd[1503]: time="2025-12-16T12:31:48.405104123Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718" Dec 16 12:31:48.406331 containerd[1503]: time="2025-12-16T12:31:48.406283686Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:48.410462 containerd[1503]: time="2025-12-16T12:31:48.409349137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:48.410462 containerd[1503]: time="2025-12-16T12:31:48.410345457Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 866.82998ms" Dec 16 12:31:48.410462 containerd[1503]: time="2025-12-16T12:31:48.410377924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 12:31:48.411109 containerd[1503]: time="2025-12-16T12:31:48.411081440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:31:48.817558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:31:48.818922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:31:48.964455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:31:48.968471 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:31:49.017003 kubelet[1985]: E1216 12:31:49.016925 1985 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:31:49.019972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:31:49.020109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:31:49.022365 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.5M memory peak. Dec 16 12:31:49.561264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256288247.mount: Deactivated successfully. 
Dec 16 12:31:49.740498 containerd[1503]: time="2025-12-16T12:31:49.740309417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:49.741667 containerd[1503]: time="2025-12-16T12:31:49.741481555Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255" Dec 16 12:31:49.742380 containerd[1503]: time="2025-12-16T12:31:49.742343399Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:49.744339 containerd[1503]: time="2025-12-16T12:31:49.744307177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:49.745057 containerd[1503]: time="2025-12-16T12:31:49.744999091Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.333882659s" Dec 16 12:31:49.745057 containerd[1503]: time="2025-12-16T12:31:49.745038806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 12:31:49.745674 containerd[1503]: time="2025-12-16T12:31:49.745648001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:31:50.284986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454951673.mount: Deactivated successfully. 
Dec 16 12:31:51.046110 containerd[1503]: time="2025-12-16T12:31:51.046061340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.046984 containerd[1503]: time="2025-12-16T12:31:51.046949501Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Dec 16 12:31:51.048261 containerd[1503]: time="2025-12-16T12:31:51.048117680Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.051076 containerd[1503]: time="2025-12-16T12:31:51.051042737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.052325 containerd[1503]: time="2025-12-16T12:31:51.052277423Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.306596814s" Dec 16 12:31:51.052442 containerd[1503]: time="2025-12-16T12:31:51.052309775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 12:31:51.052863 containerd[1503]: time="2025-12-16T12:31:51.052838021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:31:51.516005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290450121.mount: Deactivated successfully. 
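Each completed pull above (and the pause/etcd pulls that follow) is reported by containerd as a "Pulled image ... in <duration>" message. A small, self-contained sketch for extracting image names and pull times from such journal text; it assumes the log is fed on stdin and matches only the message format visible in these lines:

import re
import sys

# Matches containerd completion messages such as:
#   msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id ... in 474.410488ms"
PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+(?:ms|s))')

for line in sys.stdin:
    m = PULLED.search(line)
    if m:
        print(f"{m.group(1)}: {m.group(2)}")

Run against the lines above it would print, for example, "registry.k8s.io/coredns/coredns:v1.12.1: 1.306596814s".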
Dec 16 12:31:51.522901 containerd[1503]: time="2025-12-16T12:31:51.522844626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.523662 containerd[1503]: time="2025-12-16T12:31:51.523468644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Dec 16 12:31:51.524537 containerd[1503]: time="2025-12-16T12:31:51.524506054Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.526622 containerd[1503]: time="2025-12-16T12:31:51.526591298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:51.527489 containerd[1503]: time="2025-12-16T12:31:51.527279537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 474.410488ms" Dec 16 12:31:51.527489 containerd[1503]: time="2025-12-16T12:31:51.527310726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 12:31:51.527804 containerd[1503]: time="2025-12-16T12:31:51.527754426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:31:52.007240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427791065.mount: Deactivated successfully. Dec 16 12:31:54.174748 containerd[1503]: time="2025-12-16T12:31:54.174694305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:54.175946 containerd[1503]: time="2025-12-16T12:31:54.175908622Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Dec 16 12:31:54.176776 containerd[1503]: time="2025-12-16T12:31:54.176709047Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:54.180050 containerd[1503]: time="2025-12-16T12:31:54.180003443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:31:54.181115 containerd[1503]: time="2025-12-16T12:31:54.181079997Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.653294228s" Dec 16 12:31:54.181193 containerd[1503]: time="2025-12-16T12:31:54.181119655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 12:31:59.067425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 12:31:59.068829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:31:59.327019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:31:59.338630 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:31:59.357528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:31:59.360199 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:31:59.360456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:31:59.360694 systemd[1]: kubelet.service: Consumed 121ms CPU time, 102.7M memory peak. Dec 16 12:31:59.365465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:31:59.395328 systemd[1]: Reload requested from client PID 2159 ('systemctl') (unit session-5.scope)... Dec 16 12:31:59.395343 systemd[1]: Reloading... Dec 16 12:31:59.470298 zram_generator::config[2201]: No configuration found. Dec 16 12:31:59.829343 systemd[1]: Reloading finished in 433 ms. Dec 16 12:31:59.893768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:31:59.893839 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:31:59.894053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:31:59.894092 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95M memory peak. Dec 16 12:31:59.895444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:32:00.023109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:32:00.039627 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:32:00.074752 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:32:00.074752 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:32:00.075442 kubelet[2246]: I1216 12:32:00.075386 2246 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:32:00.904110 kubelet[2246]: I1216 12:32:00.903982 2246 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:32:00.904110 kubelet[2246]: I1216 12:32:00.904015 2246 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:32:00.907240 kubelet[2246]: I1216 12:32:00.907223 2246 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:32:00.907350 kubelet[2246]: I1216 12:32:00.907336 2246 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:32:00.907626 kubelet[2246]: I1216 12:32:00.907613 2246 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:32:00.989964 kubelet[2246]: E1216 12:32:00.989387 2246 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:32:00.993591 kubelet[2246]: I1216 12:32:00.992883 2246 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:32:01.003109 kubelet[2246]: I1216 12:32:00.999463 2246 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:32:01.003109 kubelet[2246]: I1216 12:32:01.002320 2246 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 12:32:01.003109 kubelet[2246]: I1216 12:32:01.002553 2246 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:32:01.003109 kubelet[2246]: I1216 12:32:01.002574 2246 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:32:01.003349 kubelet[2246]: I1216 12:32:01.002731 2246 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:32:01.003349 kubelet[2246]: I1216 12:32:01.002739 2246 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:32:01.003349 kubelet[2246]: I1216 12:32:01.002857 2246 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:32:01.009093 kubelet[2246]: I1216 12:32:01.005703 2246 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:32:01.010874 kubelet[2246]: I1216 12:32:01.010112 2246 kubelet.go:475] "Attempting to sync node with API server" Dec 16 
12:32:01.010874 kubelet[2246]: I1216 12:32:01.010802 2246 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:32:01.011039 kubelet[2246]: E1216 12:32:01.010752 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:32:01.011417 kubelet[2246]: I1216 12:32:01.011383 2246 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:32:01.011417 kubelet[2246]: I1216 12:32:01.011410 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:32:01.012851 kubelet[2246]: E1216 12:32:01.012824 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:32:01.013377 kubelet[2246]: I1216 12:32:01.013354 2246 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:32:01.015869 kubelet[2246]: I1216 12:32:01.015836 2246 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:32:01.016021 kubelet[2246]: I1216 12:32:01.016009 2246 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:32:01.016116 kubelet[2246]: W1216 12:32:01.016106 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:32:01.018954 kubelet[2246]: I1216 12:32:01.018934 2246 server.go:1262] "Started kubelet" Dec 16 12:32:01.019517 kubelet[2246]: I1216 12:32:01.019464 2246 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:32:01.019570 kubelet[2246]: I1216 12:32:01.019522 2246 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:32:01.019817 kubelet[2246]: I1216 12:32:01.019788 2246 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:32:01.019944 kubelet[2246]: I1216 12:32:01.019928 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:32:01.020164 kubelet[2246]: I1216 12:32:01.020121 2246 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:32:01.022330 kubelet[2246]: I1216 12:32:01.022302 2246 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:32:01.025138 kubelet[2246]: I1216 12:32:01.024312 2246 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:32:01.025138 kubelet[2246]: E1216 12:32:01.024827 2246 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:32:01.025667 kubelet[2246]: I1216 12:32:01.025635 2246 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:32:01.025778 kubelet[2246]: I1216 12:32:01.025761 2246 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:32:01.026414 kubelet[2246]: E1216 12:32:01.026378 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:32:01.026480 kubelet[2246]: E1216 12:32:01.026460 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Dec 16 12:32:01.026519 kubelet[2246]: I1216 12:32:01.026503 2246 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:32:01.028716 kubelet[2246]: I1216 12:32:01.028680 2246 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:32:01.029703 kubelet[2246]: I1216 12:32:01.029663 2246 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:32:01.029807 kubelet[2246]: I1216 12:32:01.029785 2246 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:32:01.032439 kubelet[2246]: E1216 12:32:01.026321 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b21267578ab1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:32:01.018890929 +0000 UTC m=+0.976324585,LastTimestamp:2025-12-16 12:32:01.018890929 +0000 UTC m=+0.976324585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:32:01.032872 kubelet[2246]: I1216 12:32:01.032847 2246 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:32:01.037072 kubelet[2246]: E1216 12:32:01.037032 2246 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:32:01.044872 kubelet[2246]: I1216 12:32:01.044851 2246 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:32:01.045016 kubelet[2246]: I1216 12:32:01.045003 2246 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:32:01.045081 kubelet[2246]: I1216 12:32:01.045073 2246 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:32:01.047371 kubelet[2246]: I1216 12:32:01.047349 2246 policy_none.go:49] "None policy: Start" Dec 16 12:32:01.047489 kubelet[2246]: I1216 12:32:01.047474 2246 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:32:01.047547 kubelet[2246]: I1216 12:32:01.047537 2246 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:32:01.048683 kubelet[2246]: I1216 12:32:01.048642 2246 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:32:01.048758 kubelet[2246]: I1216 12:32:01.048709 2246 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:32:01.048758 kubelet[2246]: I1216 12:32:01.048745 2246 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:32:01.048855 kubelet[2246]: E1216 12:32:01.048788 2246 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:32:01.049836 kubelet[2246]: I1216 12:32:01.049154 2246 policy_none.go:47] "Start" Dec 16 12:32:01.050466 kubelet[2246]: E1216 12:32:01.050435 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:32:01.054231 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:32:01.068657 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:32:01.080788 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:32:01.082679 kubelet[2246]: E1216 12:32:01.082643 2246 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:32:01.083007 kubelet[2246]: I1216 12:32:01.082986 2246 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:32:01.083038 kubelet[2246]: I1216 12:32:01.083002 2246 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:32:01.083218 kubelet[2246]: I1216 12:32:01.083195 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:32:01.084348 kubelet[2246]: E1216 12:32:01.084324 2246 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:32:01.084402 kubelet[2246]: E1216 12:32:01.084378 2246 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:32:01.160563 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 16 12:32:01.184919 kubelet[2246]: I1216 12:32:01.184876 2246 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:32:01.185388 kubelet[2246]: E1216 12:32:01.185351 2246 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Dec 16 12:32:01.187938 kubelet[2246]: E1216 12:32:01.187915 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:01.190462 systemd[1]: Created slice kubepods-burstable-pod7b501c8dfbe5248ad6ae01d87b845908.slice - libcontainer container kubepods-burstable-pod7b501c8dfbe5248ad6ae01d87b845908.slice. 
Dec 16 12:32:01.203558 kubelet[2246]: E1216 12:32:01.203518 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:01.205757 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 16 12:32:01.207770 kubelet[2246]: E1216 12:32:01.207749 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:01.227167 kubelet[2246]: I1216 12:32:01.226938 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:01.227167 kubelet[2246]: I1216 12:32:01.226976 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:01.227167 kubelet[2246]: I1216 12:32:01.227037 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:01.227167 kubelet[2246]: I1216 12:32:01.227094 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:01.227167 kubelet[2246]: I1216 12:32:01.227119 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:01.227398 kubelet[2246]: I1216 12:32:01.227137 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:01.227398 kubelet[2246]: I1216 12:32:01.227201 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:01.227398 kubelet[2246]: I1216 12:32:01.227216 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:01.227398 kubelet[2246]: I1216 12:32:01.227293 2246 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:01.228396 kubelet[2246]: E1216 12:32:01.228351 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Dec 16 12:32:01.387055 kubelet[2246]: I1216 12:32:01.387008 2246 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:32:01.387413 kubelet[2246]: E1216 12:32:01.387368 2246 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Dec 16 12:32:01.493557 containerd[1503]: time="2025-12-16T12:32:01.493059454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:01.506579 containerd[1503]: time="2025-12-16T12:32:01.506530011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b501c8dfbe5248ad6ae01d87b845908,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:01.509943 containerd[1503]: time="2025-12-16T12:32:01.509905655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:01.629484 kubelet[2246]: E1216 12:32:01.629404 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Dec 16 12:32:01.789524 kubelet[2246]: I1216 12:32:01.789389 2246 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:32:01.789739 kubelet[2246]: E1216 12:32:01.789708 2246 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Dec 16 12:32:01.963394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395306518.mount: Deactivated successfully. 
Dec 16 12:32:01.970207 containerd[1503]: time="2025-12-16T12:32:01.969714506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:32:01.972335 containerd[1503]: time="2025-12-16T12:32:01.972291485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 16 12:32:01.973188 kubelet[2246]: E1216 12:32:01.973126 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:32:01.973978 containerd[1503]: time="2025-12-16T12:32:01.973943766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:32:01.977312 containerd[1503]: time="2025-12-16T12:32:01.976904049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:32:01.977312 containerd[1503]: time="2025-12-16T12:32:01.976920178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:32:01.977741 containerd[1503]: time="2025-12-16T12:32:01.977689065Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:32:01.978432 containerd[1503]: time="2025-12-16T12:32:01.978215051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:32:01.979900 containerd[1503]: time="2025-12-16T12:32:01.979865892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:32:01.982110 containerd[1503]: time="2025-12-16T12:32:01.982060208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 485.623989ms" Dec 16 12:32:01.983070 containerd[1503]: time="2025-12-16T12:32:01.983022208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 471.847215ms" Dec 16 12:32:01.985295 containerd[1503]: time="2025-12-16T12:32:01.985264392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 477.488977ms" Dec 16 12:32:01.997287 kubelet[2246]: E1216 12:32:01.995881 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:32:02.010903 containerd[1503]: time="2025-12-16T12:32:02.010292455Z" level=info msg="connecting to shim 0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790" address="unix:///run/containerd/s/7deabc5bdbae4390911121c3197f23784ba52c11402bae9043487d3257e134c9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:02.010903 containerd[1503]: time="2025-12-16T12:32:02.010292935Z" level=info msg="connecting to shim 7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d" address="unix:///run/containerd/s/0d68c0c33428638e2b35692f57cc7f33d4553b167d6981a7c3af244bcdf9e693" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:02.017034 containerd[1503]: time="2025-12-16T12:32:02.016396802Z" level=info msg="connecting to shim 3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec" address="unix:///run/containerd/s/922c155f9d0aefe92a6d3e9c60aff7f7f9298ced50bea66fcb05c2873b7da432" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:02.035420 systemd[1]: Started cri-containerd-0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790.scope - libcontainer container 0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790. Dec 16 12:32:02.039633 systemd[1]: Started cri-containerd-3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec.scope - libcontainer container 3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec. Dec 16 12:32:02.041040 systemd[1]: Started cri-containerd-7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d.scope - libcontainer container 7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d. 
Dec 16 12:32:02.088673 kubelet[2246]: E1216 12:32:02.088618 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:32:02.091863 containerd[1503]: time="2025-12-16T12:32:02.091816235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790\"" Dec 16 12:32:02.122783 kubelet[2246]: E1216 12:32:02.122743 2246 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:32:02.132343 containerd[1503]: time="2025-12-16T12:32:02.132295722Z" level=info msg="CreateContainer within sandbox \"0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:32:02.132744 containerd[1503]: time="2025-12-16T12:32:02.132720739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b501c8dfbe5248ad6ae01d87b845908,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec\"" Dec 16 12:32:02.135092 containerd[1503]: time="2025-12-16T12:32:02.135065892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d\"" Dec 16 12:32:02.138156 containerd[1503]: time="2025-12-16T12:32:02.138131933Z" level=info msg="CreateContainer within sandbox \"3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:32:02.139647 containerd[1503]: time="2025-12-16T12:32:02.139615368Z" level=info msg="CreateContainer within sandbox \"7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:32:02.142715 containerd[1503]: time="2025-12-16T12:32:02.142673445Z" level=info msg="Container fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:02.146250 containerd[1503]: time="2025-12-16T12:32:02.146209125Z" level=info msg="Container d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:02.151147 containerd[1503]: time="2025-12-16T12:32:02.151105578Z" level=info msg="CreateContainer within sandbox \"0630db23e948820c428f4749ae4f90f0e79286292d59249d3a9144b65501c790\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a\"" Dec 16 12:32:02.151466 containerd[1503]: time="2025-12-16T12:32:02.151438707Z" level=info msg="Container 83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490: CDI devices from CRI Config.CDIDevices: []" 
Dec 16 12:32:02.151938 containerd[1503]: time="2025-12-16T12:32:02.151897821Z" level=info msg="StartContainer for \"fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a\"" Dec 16 12:32:02.153489 containerd[1503]: time="2025-12-16T12:32:02.153462858Z" level=info msg="connecting to shim fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a" address="unix:///run/containerd/s/7deabc5bdbae4390911121c3197f23784ba52c11402bae9043487d3257e134c9" protocol=ttrpc version=3 Dec 16 12:32:02.156985 containerd[1503]: time="2025-12-16T12:32:02.156948072Z" level=info msg="CreateContainer within sandbox \"3a3f8f87a1386989b9bb5dc8bcbdd5548f0704d982de540655fc73df304863ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986\"" Dec 16 12:32:02.157823 containerd[1503]: time="2025-12-16T12:32:02.157794343Z" level=info msg="StartContainer for \"d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986\"" Dec 16 12:32:02.158853 containerd[1503]: time="2025-12-16T12:32:02.158821345Z" level=info msg="connecting to shim d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986" address="unix:///run/containerd/s/922c155f9d0aefe92a6d3e9c60aff7f7f9298ced50bea66fcb05c2873b7da432" protocol=ttrpc version=3 Dec 16 12:32:02.159959 containerd[1503]: time="2025-12-16T12:32:02.159827618Z" level=info msg="CreateContainer within sandbox \"7a3fa6fd37f4e93e0b081a2c467bd7146cd9b9fd2273276007e769ce5339e96d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490\"" Dec 16 12:32:02.160333 containerd[1503]: time="2025-12-16T12:32:02.160312224Z" level=info msg="StartContainer for \"83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490\"" Dec 16 12:32:02.161431 containerd[1503]: time="2025-12-16T12:32:02.161399578Z" level=info msg="connecting to shim 83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490" address="unix:///run/containerd/s/0d68c0c33428638e2b35692f57cc7f33d4553b167d6981a7c3af244bcdf9e693" protocol=ttrpc version=3 Dec 16 12:32:02.178412 systemd[1]: Started cri-containerd-fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a.scope - libcontainer container fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a. Dec 16 12:32:02.187398 systemd[1]: Started cri-containerd-83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490.scope - libcontainer container 83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490. Dec 16 12:32:02.189342 systemd[1]: Started cri-containerd-d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986.scope - libcontainer container d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986. 
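The "Started cri-containerd-<id>.scope - libcontainer container <id>" lines show that each container is wrapped in its own transient systemd scope, which is how the systemd cgroup driver delegates per-container cgroups under the pod's kubepods slice. The hierarchy can be inspected on the node, for example (illustrative commands; the long identifier is the kube-controller-manager container id from the lines above):

    systemd-cgls --no-pager
    systemctl status cri-containerd-fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a.scope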
Dec 16 12:32:02.230361 containerd[1503]: time="2025-12-16T12:32:02.230319102Z" level=info msg="StartContainer for \"fc80ccf278e3c1fac59b55cab51131896a7b5db996c5256379021b89b1fdb49a\" returns successfully" Dec 16 12:32:02.251095 containerd[1503]: time="2025-12-16T12:32:02.251048095Z" level=info msg="StartContainer for \"83effe6f8e76772c7f7d9992d0d8ee89df696ff3fb6003fac1c8514900ede490\" returns successfully" Dec 16 12:32:02.258681 containerd[1503]: time="2025-12-16T12:32:02.258636998Z" level=info msg="StartContainer for \"d9a818e4d26630f4756b2d755e0e8b931ed735e80d5a4f9207a172b37368e986\" returns successfully" Dec 16 12:32:02.591061 kubelet[2246]: I1216 12:32:02.590984 2246 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:32:03.060891 kubelet[2246]: E1216 12:32:03.060504 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:03.061602 kubelet[2246]: E1216 12:32:03.061428 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:03.064495 kubelet[2246]: E1216 12:32:03.064475 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:04.067694 kubelet[2246]: E1216 12:32:04.067559 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:04.068624 kubelet[2246]: E1216 12:32:04.068604 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:04.070356 kubelet[2246]: E1216 12:32:04.069046 2246 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:32:04.205293 kubelet[2246]: E1216 12:32:04.205015 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:32:04.293356 kubelet[2246]: I1216 12:32:04.293311 2246 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:32:04.293356 kubelet[2246]: E1216 12:32:04.293352 2246 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:32:04.325758 kubelet[2246]: I1216 12:32:04.325632 2246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:04.330958 kubelet[2246]: E1216 12:32:04.330914 2246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:04.330958 kubelet[2246]: I1216 12:32:04.330950 2246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:04.332670 kubelet[2246]: E1216 12:32:04.332640 2246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:04.332670 kubelet[2246]: I1216 12:32:04.332668 2246 
kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:04.334215 kubelet[2246]: E1216 12:32:04.334182 2246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:05.014373 kubelet[2246]: I1216 12:32:05.014118 2246 apiserver.go:52] "Watching apiserver" Dec 16 12:32:05.026389 kubelet[2246]: I1216 12:32:05.026351 2246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:32:05.066961 kubelet[2246]: I1216 12:32:05.066925 2246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:05.069015 kubelet[2246]: E1216 12:32:05.068975 2246 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:06.467212 systemd[1]: Reload requested from client PID 2533 ('systemctl') (unit session-5.scope)... Dec 16 12:32:06.467228 systemd[1]: Reloading... Dec 16 12:32:06.522310 zram_generator::config[2576]: No configuration found. Dec 16 12:32:06.695864 systemd[1]: Reloading finished in 228 ms. Dec 16 12:32:06.702204 kubelet[2246]: I1216 12:32:06.701933 2246 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:06.715992 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:32:06.735504 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:32:06.736301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:32:06.736375 systemd[1]: kubelet.service: Consumed 1.277s CPU time, 123.7M memory peak. Dec 16 12:32:06.738212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:32:06.879032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:32:06.884930 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:32:06.929060 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:32:06.929060 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:32:06.929060 kubelet[2618]: I1216 12:32:06.928897 2618 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:32:06.936076 kubelet[2618]: I1216 12:32:06.936032 2618 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:32:06.936076 kubelet[2618]: I1216 12:32:06.936059 2618 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:32:06.936339 kubelet[2618]: I1216 12:32:06.936094 2618 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:32:06.936339 kubelet[2618]: I1216 12:32:06.936101 2618 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:32:06.936539 kubelet[2618]: I1216 12:32:06.936493 2618 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:32:06.939414 kubelet[2618]: I1216 12:32:06.939389 2618 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:32:06.941738 kubelet[2618]: I1216 12:32:06.941694 2618 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:32:06.944946 kubelet[2618]: I1216 12:32:06.944924 2618 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:32:06.948207 kubelet[2618]: I1216 12:32:06.948171 2618 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 12:32:06.948591 kubelet[2618]: I1216 12:32:06.948557 2618 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:32:06.948821 kubelet[2618]: I1216 12:32:06.948651 2618 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:32:06.948939 kubelet[2618]: I1216 12:32:06.948926 2618 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:32:06.949000 kubelet[2618]: I1216 12:32:06.948992 2618 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:32:06.949067 kubelet[2618]: I1216 12:32:06.949059 2618 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:32:06.950148 kubelet[2618]: I1216 12:32:06.950126 2618 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:32:06.950495 kubelet[2618]: I1216 12:32:06.950473 2618 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:32:06.950651 kubelet[2618]: I1216 12:32:06.950632 2618 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:32:06.950754 kubelet[2618]: I1216 12:32:06.950743 2618 kubelet.go:387] "Adding apiserver pod source" Dec 
16 12:32:06.950823 kubelet[2618]: I1216 12:32:06.950814 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:32:06.953848 kubelet[2618]: I1216 12:32:06.953823 2618 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:32:06.954558 kubelet[2618]: I1216 12:32:06.954530 2618 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:32:06.954605 kubelet[2618]: I1216 12:32:06.954568 2618 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:32:06.959754 kubelet[2618]: I1216 12:32:06.958712 2618 server.go:1262] "Started kubelet" Dec 16 12:32:06.964018 kubelet[2618]: I1216 12:32:06.963989 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:32:06.969396 kubelet[2618]: I1216 12:32:06.969329 2618 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:32:06.970064 kubelet[2618]: I1216 12:32:06.970022 2618 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:32:06.970108 kubelet[2618]: I1216 12:32:06.970084 2618 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:32:06.970379 kubelet[2618]: I1216 12:32:06.970358 2618 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:32:06.970412 kubelet[2618]: I1216 12:32:06.970392 2618 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:32:06.970491 kubelet[2618]: E1216 12:32:06.970471 2618 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:32:06.970726 kubelet[2618]: I1216 12:32:06.970709 2618 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:32:06.970841 kubelet[2618]: I1216 12:32:06.970827 2618 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:32:06.974465 kubelet[2618]: I1216 12:32:06.974416 2618 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:32:06.974677 kubelet[2618]: I1216 12:32:06.974646 2618 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:32:06.974854 kubelet[2618]: I1216 12:32:06.969376 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:32:06.975232 kubelet[2618]: E1216 12:32:06.975206 2618 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:32:06.976650 kubelet[2618]: I1216 12:32:06.976614 2618 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:32:06.977633 kubelet[2618]: I1216 12:32:06.977605 2618 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:32:06.985070 kubelet[2618]: I1216 12:32:06.985036 2618 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:32:06.986311 kubelet[2618]: I1216 12:32:06.986184 2618 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:32:06.986311 kubelet[2618]: I1216 12:32:06.986208 2618 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:32:06.986311 kubelet[2618]: I1216 12:32:06.986229 2618 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:32:06.986311 kubelet[2618]: E1216 12:32:06.986276 2618 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:32:07.014055 kubelet[2618]: I1216 12:32:07.014022 2618 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:32:07.015295 kubelet[2618]: I1216 12:32:07.014209 2618 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:32:07.015462 kubelet[2618]: I1216 12:32:07.015435 2618 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:32:07.016647 kubelet[2618]: I1216 12:32:07.016568 2618 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:32:07.016785 kubelet[2618]: I1216 12:32:07.016733 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:32:07.016908 kubelet[2618]: I1216 12:32:07.016896 2618 policy_none.go:49] "None policy: Start" Dec 16 12:32:07.016966 kubelet[2618]: I1216 12:32:07.016959 2618 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:32:07.017020 kubelet[2618]: I1216 12:32:07.017011 2618 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:32:07.017192 kubelet[2618]: I1216 12:32:07.017177 2618 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:32:07.017267 kubelet[2618]: I1216 12:32:07.017239 2618 policy_none.go:47] "Start" Dec 16 12:32:07.021068 kubelet[2618]: E1216 12:32:07.021047 2618 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:32:07.021278 kubelet[2618]: I1216 12:32:07.021241 2618 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:32:07.021362 kubelet[2618]: I1216 12:32:07.021271 2618 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:32:07.021521 kubelet[2618]: I1216 12:32:07.021503 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:32:07.022629 kubelet[2618]: E1216 12:32:07.022318 2618 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:32:07.087542 kubelet[2618]: I1216 12:32:07.087490 2618 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:07.087684 kubelet[2618]: I1216 12:32:07.087515 2618 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.087684 kubelet[2618]: I1216 12:32:07.087634 2618 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:07.093730 kubelet[2618]: E1216 12:32:07.093700 2618 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:07.124064 kubelet[2618]: I1216 12:32:07.124036 2618 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:32:07.131010 kubelet[2618]: I1216 12:32:07.130931 2618 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:32:07.131133 kubelet[2618]: I1216 12:32:07.131061 2618 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:32:07.172503 kubelet[2618]: I1216 12:32:07.172453 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:07.172503 kubelet[2618]: I1216 12:32:07.172500 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.172666 kubelet[2618]: I1216 12:32:07.172523 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.172666 kubelet[2618]: I1216 12:32:07.172538 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.172666 kubelet[2618]: I1216 12:32:07.172595 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.172727 kubelet[2618]: I1216 12:32:07.172688 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " 
pod="kube-system/kube-scheduler-localhost" Dec 16 12:32:07.172727 kubelet[2618]: I1216 12:32:07.172721 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:07.172774 kubelet[2618]: I1216 12:32:07.172736 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b501c8dfbe5248ad6ae01d87b845908-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b501c8dfbe5248ad6ae01d87b845908\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:07.172774 kubelet[2618]: I1216 12:32:07.172752 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:07.951231 kubelet[2618]: I1216 12:32:07.951179 2618 apiserver.go:52] "Watching apiserver" Dec 16 12:32:07.971366 kubelet[2618]: I1216 12:32:07.971321 2618 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:32:08.000889 kubelet[2618]: I1216 12:32:08.000861 2618 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:08.001173 kubelet[2618]: I1216 12:32:08.001150 2618 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:08.008307 kubelet[2618]: E1216 12:32:08.007835 2618 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:32:08.008307 kubelet[2618]: E1216 12:32:08.008179 2618 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:32:08.022162 kubelet[2618]: I1216 12:32:08.022097 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.02208422 podStartE2EDuration="2.02208422s" podCreationTimestamp="2025-12-16 12:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:08.021954676 +0000 UTC m=+1.133779296" watchObservedRunningTime="2025-12-16 12:32:08.02208422 +0000 UTC m=+1.133908840" Dec 16 12:32:08.038744 kubelet[2618]: I1216 12:32:08.038376 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.038358297 podStartE2EDuration="1.038358297s" podCreationTimestamp="2025-12-16 12:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:08.030385368 +0000 UTC m=+1.142209988" watchObservedRunningTime="2025-12-16 12:32:08.038358297 +0000 UTC m=+1.150182877" Dec 16 12:32:08.047403 kubelet[2618]: I1216 12:32:08.047229 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.047214946 podStartE2EDuration="1.047214946s" podCreationTimestamp="2025-12-16 12:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:08.039061865 +0000 UTC m=+1.150886485" watchObservedRunningTime="2025-12-16 12:32:08.047214946 +0000 UTC m=+1.159039606" Dec 16 12:32:08.154619 sudo[1674]: pam_unix(sudo:session): session closed for user root Dec 16 12:32:08.156208 sshd[1673]: Connection closed by 10.0.0.1 port 43948 Dec 16 12:32:08.156440 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:08.162621 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:43948.service: Deactivated successfully. Dec 16 12:32:08.165838 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:32:08.166845 systemd[1]: session-5.scope: Consumed 6.384s CPU time, 219.5M memory peak. Dec 16 12:32:08.168972 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:32:08.169917 systemd-logind[1485]: Removed session 5. Dec 16 12:32:11.342897 kubelet[2618]: I1216 12:32:11.342838 2618 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:32:11.343317 containerd[1503]: time="2025-12-16T12:32:11.343286450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:32:11.343514 kubelet[2618]: I1216 12:32:11.343486 2618 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:32:12.381291 systemd[1]: Created slice kubepods-besteffort-pod75bf3fa3_beb0_4b1c_9895_119856d0c2fa.slice - libcontainer container kubepods-besteffort-pod75bf3fa3_beb0_4b1c_9895_119856d0c2fa.slice. Dec 16 12:32:12.400632 systemd[1]: Created slice kubepods-burstable-pod4dea6c85_edf2_425a_a2f1_9199b8000106.slice - libcontainer container kubepods-burstable-pod4dea6c85_edf2_425a_a2f1_9199b8000106.slice. 
Dec 16 12:32:12.406483 kubelet[2618]: I1216 12:32:12.406296 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fgw\" (UniqueName: \"kubernetes.io/projected/75bf3fa3-beb0-4b1c-9895-119856d0c2fa-kube-api-access-v8fgw\") pod \"kube-proxy-gtpjd\" (UID: \"75bf3fa3-beb0-4b1c-9895-119856d0c2fa\") " pod="kube-system/kube-proxy-gtpjd" Dec 16 12:32:12.406483 kubelet[2618]: I1216 12:32:12.406479 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/4dea6c85-edf2-425a-a2f1-9199b8000106-cni-plugin\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.406993 kubelet[2618]: I1216 12:32:12.406532 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4dea6c85-edf2-425a-a2f1-9199b8000106-run\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.406993 kubelet[2618]: I1216 12:32:12.406557 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/4dea6c85-edf2-425a-a2f1-9199b8000106-cni\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.406993 kubelet[2618]: I1216 12:32:12.406571 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/4dea6c85-edf2-425a-a2f1-9199b8000106-flannel-cfg\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.406993 kubelet[2618]: I1216 12:32:12.406758 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75bf3fa3-beb0-4b1c-9895-119856d0c2fa-kube-proxy\") pod \"kube-proxy-gtpjd\" (UID: \"75bf3fa3-beb0-4b1c-9895-119856d0c2fa\") " pod="kube-system/kube-proxy-gtpjd" Dec 16 12:32:12.406993 kubelet[2618]: I1216 12:32:12.406827 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5jw\" (UniqueName: \"kubernetes.io/projected/4dea6c85-edf2-425a-a2f1-9199b8000106-kube-api-access-8c5jw\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.407108 kubelet[2618]: I1216 12:32:12.406858 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bf3fa3-beb0-4b1c-9895-119856d0c2fa-xtables-lock\") pod \"kube-proxy-gtpjd\" (UID: \"75bf3fa3-beb0-4b1c-9895-119856d0c2fa\") " pod="kube-system/kube-proxy-gtpjd" Dec 16 12:32:12.407108 kubelet[2618]: I1216 12:32:12.406938 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bf3fa3-beb0-4b1c-9895-119856d0c2fa-lib-modules\") pod \"kube-proxy-gtpjd\" (UID: \"75bf3fa3-beb0-4b1c-9895-119856d0c2fa\") " pod="kube-system/kube-proxy-gtpjd" Dec 16 12:32:12.407108 kubelet[2618]: I1216 12:32:12.406997 2618 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dea6c85-edf2-425a-a2f1-9199b8000106-xtables-lock\") pod \"kube-flannel-ds-g4jvt\" (UID: \"4dea6c85-edf2-425a-a2f1-9199b8000106\") " pod="kube-flannel/kube-flannel-ds-g4jvt" Dec 16 12:32:12.721716 containerd[1503]: time="2025-12-16T12:32:12.721594773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtpjd,Uid:75bf3fa3-beb0-4b1c-9895-119856d0c2fa,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:12.733542 containerd[1503]: time="2025-12-16T12:32:12.733494418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-g4jvt,Uid:4dea6c85-edf2-425a-a2f1-9199b8000106,Namespace:kube-flannel,Attempt:0,}" Dec 16 12:32:12.761630 containerd[1503]: time="2025-12-16T12:32:12.761263724Z" level=info msg="connecting to shim b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e" address="unix:///run/containerd/s/8037110aa81c412196f7db4efdc8138a5cadd38efdb0c876334da72e5bce1935" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:12.773733 containerd[1503]: time="2025-12-16T12:32:12.773683484Z" level=info msg="connecting to shim 8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70" address="unix:///run/containerd/s/a8372971b38ec8c14cdc37dfd52b518a1234e5d33db455cbc590b54dd778081f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:12.802484 systemd[1]: Started cri-containerd-b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e.scope - libcontainer container b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e. Dec 16 12:32:12.808557 systemd[1]: Started cri-containerd-8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70.scope - libcontainer container 8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70. 
Dec 16 12:32:12.837366 containerd[1503]: time="2025-12-16T12:32:12.837302307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtpjd,Uid:75bf3fa3-beb0-4b1c-9895-119856d0c2fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e\"" Dec 16 12:32:12.845648 containerd[1503]: time="2025-12-16T12:32:12.845605031Z" level=info msg="CreateContainer within sandbox \"b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:32:12.852956 containerd[1503]: time="2025-12-16T12:32:12.852888967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-g4jvt,Uid:4dea6c85-edf2-425a-a2f1-9199b8000106,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\"" Dec 16 12:32:12.856281 containerd[1503]: time="2025-12-16T12:32:12.856238613Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Dec 16 12:32:12.861780 containerd[1503]: time="2025-12-16T12:32:12.861716367Z" level=info msg="Container bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:12.869587 containerd[1503]: time="2025-12-16T12:32:12.869533580Z" level=info msg="CreateContainer within sandbox \"b1643300b4234b05503dcff20066d90984a85f9947ae0286b55ac96cbcf83d0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6\"" Dec 16 12:32:12.870630 containerd[1503]: time="2025-12-16T12:32:12.870599415Z" level=info msg="StartContainer for \"bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6\"" Dec 16 12:32:12.872193 containerd[1503]: time="2025-12-16T12:32:12.872150639Z" level=info msg="connecting to shim bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6" address="unix:///run/containerd/s/8037110aa81c412196f7db4efdc8138a5cadd38efdb0c876334da72e5bce1935" protocol=ttrpc version=3 Dec 16 12:32:12.894547 systemd[1]: Started cri-containerd-bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6.scope - libcontainer container bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6. Dec 16 12:32:12.980727 containerd[1503]: time="2025-12-16T12:32:12.980609763Z" level=info msg="StartContainer for \"bd2fdb5ef5a6d03dbcd1531a1a238d4a8b20c0a54222849a0a7f4318daf649d6\" returns successfully" Dec 16 12:32:13.023601 kubelet[2618]: I1216 12:32:13.023530 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtpjd" podStartSLOduration=1.023513249 podStartE2EDuration="1.023513249s" podCreationTimestamp="2025-12-16 12:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:13.023167242 +0000 UTC m=+6.134991862" watchObservedRunningTime="2025-12-16 12:32:13.023513249 +0000 UTC m=+6.135337909" Dec 16 12:32:13.917608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843113108.mount: Deactivated successfully. 
Dec 16 12:32:13.948083 containerd[1503]: time="2025-12-16T12:32:13.948029151Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:13.948630 containerd[1503]: time="2025-12-16T12:32:13.948587547Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=5125564" Dec 16 12:32:13.949640 containerd[1503]: time="2025-12-16T12:32:13.949608407Z" level=info msg="ImageCreate event name:\"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:13.952753 containerd[1503]: time="2025-12-16T12:32:13.952130713Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:13.953361 containerd[1503]: time="2025-12-16T12:32:13.953335359Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"5125394\" in 1.097046738s" Dec 16 12:32:13.953443 containerd[1503]: time="2025-12-16T12:32:13.953427051Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:bf6e087b7c89143a757bb62f368860d2454e71afe59ae44ecb1ab473fd00b759\"" Dec 16 12:32:13.958192 containerd[1503]: time="2025-12-16T12:32:13.958158980Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 16 12:32:13.968059 containerd[1503]: time="2025-12-16T12:32:13.967530186Z" level=info msg="Container 107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:13.970442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973242681.mount: Deactivated successfully. Dec 16 12:32:13.972971 containerd[1503]: time="2025-12-16T12:32:13.972921045Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3\"" Dec 16 12:32:13.974265 containerd[1503]: time="2025-12-16T12:32:13.973413393Z" level=info msg="StartContainer for \"107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3\"" Dec 16 12:32:13.974265 containerd[1503]: time="2025-12-16T12:32:13.974165376Z" level=info msg="connecting to shim 107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3" address="unix:///run/containerd/s/a8372971b38ec8c14cdc37dfd52b518a1234e5d33db455cbc590b54dd778081f" protocol=ttrpc version=3 Dec 16 12:32:13.997479 systemd[1]: Started cri-containerd-107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3.scope - libcontainer container 107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3. 
Dec 16 12:32:14.025185 containerd[1503]: time="2025-12-16T12:32:14.025129753Z" level=info msg="StartContainer for \"107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3\" returns successfully" Dec 16 12:32:14.027144 systemd[1]: cri-containerd-107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3.scope: Deactivated successfully. Dec 16 12:32:14.028536 containerd[1503]: time="2025-12-16T12:32:14.028500110Z" level=info msg="received container exit event container_id:\"107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3\" id:\"107a95ec5e412d95e093f691b2b9becf58e5712db105350cf9e3972faac888c3\" pid:2969 exited_at:{seconds:1765888334 nanos:28163187}" Dec 16 12:32:15.021715 containerd[1503]: time="2025-12-16T12:32:15.020726593Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Dec 16 12:32:16.622067 containerd[1503]: time="2025-12-16T12:32:16.622014461Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:16.623138 containerd[1503]: time="2025-12-16T12:32:16.622776150Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=28419854" Dec 16 12:32:16.624011 containerd[1503]: time="2025-12-16T12:32:16.623968809Z" level=info msg="ImageCreate event name:\"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:16.627443 containerd[1503]: time="2025-12-16T12:32:16.627407010Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:32:16.628828 containerd[1503]: time="2025-12-16T12:32:16.628795572Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32412118\" in 1.608023294s" Dec 16 12:32:16.628935 containerd[1503]: time="2025-12-16T12:32:16.628920706Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:253e2cac1f011511dce473642669aa3b75987d78cb108ecc51c8c2fa69f3e587\"" Dec 16 12:32:16.638205 containerd[1503]: time="2025-12-16T12:32:16.638156423Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:32:16.646614 containerd[1503]: time="2025-12-16T12:32:16.645975175Z" level=info msg="Container 4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:16.653222 containerd[1503]: time="2025-12-16T12:32:16.653034358Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062\"" Dec 16 12:32:16.654886 containerd[1503]: time="2025-12-16T12:32:16.653733079Z" level=info msg="StartContainer for \"4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062\"" Dec 16 12:32:16.654886 containerd[1503]: time="2025-12-16T12:32:16.654581378Z" level=info 
msg="connecting to shim 4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062" address="unix:///run/containerd/s/a8372971b38ec8c14cdc37dfd52b518a1234e5d33db455cbc590b54dd778081f" protocol=ttrpc version=3 Dec 16 12:32:16.674435 systemd[1]: Started cri-containerd-4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062.scope - libcontainer container 4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062. Dec 16 12:32:16.699412 systemd[1]: cri-containerd-4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062.scope: Deactivated successfully. Dec 16 12:32:16.702100 containerd[1503]: time="2025-12-16T12:32:16.702056753Z" level=info msg="received container exit event container_id:\"4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062\" id:\"4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062\" pid:3044 exited_at:{seconds:1765888336 nanos:699692757}" Dec 16 12:32:16.710396 containerd[1503]: time="2025-12-16T12:32:16.710357681Z" level=info msg="StartContainer for \"4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062\" returns successfully" Dec 16 12:32:16.726318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a6109a466d053bbd6c002c5c0f7308d283b16b84f5abf61bb08b1fde0809062-rootfs.mount: Deactivated successfully. Dec 16 12:32:16.795260 kubelet[2618]: I1216 12:32:16.794516 2618 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:32:16.854557 systemd[1]: Created slice kubepods-burstable-pod327e2140_122f_4ecf_8711_c2657b79a4a5.slice - libcontainer container kubepods-burstable-pod327e2140_122f_4ecf_8711_c2657b79a4a5.slice. Dec 16 12:32:16.861358 systemd[1]: Created slice kubepods-burstable-pod6e37a289_57f7_466c_bdab_d99dfb35d99e.slice - libcontainer container kubepods-burstable-pod6e37a289_57f7_466c_bdab_d99dfb35d99e.slice. 
Dec 16 12:32:16.940005 kubelet[2618]: I1216 12:32:16.939866 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/327e2140-122f-4ecf-8711-c2657b79a4a5-config-volume\") pod \"coredns-66bc5c9577-qj2hd\" (UID: \"327e2140-122f-4ecf-8711-c2657b79a4a5\") " pod="kube-system/coredns-66bc5c9577-qj2hd" Dec 16 12:32:16.940005 kubelet[2618]: I1216 12:32:16.939916 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dv2\" (UniqueName: \"kubernetes.io/projected/327e2140-122f-4ecf-8711-c2657b79a4a5-kube-api-access-b2dv2\") pod \"coredns-66bc5c9577-qj2hd\" (UID: \"327e2140-122f-4ecf-8711-c2657b79a4a5\") " pod="kube-system/coredns-66bc5c9577-qj2hd" Dec 16 12:32:16.940005 kubelet[2618]: I1216 12:32:16.939970 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e37a289-57f7-466c-bdab-d99dfb35d99e-config-volume\") pod \"coredns-66bc5c9577-487ch\" (UID: \"6e37a289-57f7-466c-bdab-d99dfb35d99e\") " pod="kube-system/coredns-66bc5c9577-487ch" Dec 16 12:32:16.940005 kubelet[2618]: I1216 12:32:16.940005 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9ls\" (UniqueName: \"kubernetes.io/projected/6e37a289-57f7-466c-bdab-d99dfb35d99e-kube-api-access-sj9ls\") pod \"coredns-66bc5c9577-487ch\" (UID: \"6e37a289-57f7-466c-bdab-d99dfb35d99e\") " pod="kube-system/coredns-66bc5c9577-487ch" Dec 16 12:32:17.036588 containerd[1503]: time="2025-12-16T12:32:17.036545937Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 16 12:32:17.049240 containerd[1503]: time="2025-12-16T12:32:17.048576707Z" level=info msg="Container 8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:17.058825 containerd[1503]: time="2025-12-16T12:32:17.058759073Z" level=info msg="CreateContainer within sandbox \"8bd213d82b833baf4795c84c60dfd24f5b2ebc49f62783cbc5dd0b7876ef7c70\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0\"" Dec 16 12:32:17.059829 containerd[1503]: time="2025-12-16T12:32:17.059790027Z" level=info msg="StartContainer for \"8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0\"" Dec 16 12:32:17.061983 containerd[1503]: time="2025-12-16T12:32:17.061949386Z" level=info msg="connecting to shim 8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0" address="unix:///run/containerd/s/a8372971b38ec8c14cdc37dfd52b518a1234e5d33db455cbc590b54dd778081f" protocol=ttrpc version=3 Dec 16 12:32:17.101428 systemd[1]: Started cri-containerd-8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0.scope - libcontainer container 8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0. 
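With both init steps done, the long-running kube-flannel container is started in the same sandbox. If flanneld itself needs debugging (for instance while the sandbox errors just below persist), its output can be read directly on the node using the container id from the lines above (assumes crictl is configured for this containerd instance):

    crictl logs 8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0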
Dec 16 12:32:17.133556 containerd[1503]: time="2025-12-16T12:32:17.133519659Z" level=info msg="StartContainer for \"8b2311dd605a69ed3619ab22532b78abcbe47b5b9bb6133055d9e75305dc1da0\" returns successfully" Dec 16 12:32:17.163747 containerd[1503]: time="2025-12-16T12:32:17.163679793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qj2hd,Uid:327e2140-122f-4ecf-8711-c2657b79a4a5,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:17.165573 containerd[1503]: time="2025-12-16T12:32:17.165490794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-487ch,Uid:6e37a289-57f7-466c-bdab-d99dfb35d99e,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:17.195732 containerd[1503]: time="2025-12-16T12:32:17.195450106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qj2hd,Uid:327e2140-122f-4ecf-8711-c2657b79a4a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b080131948f573f6aaca8e5165591a71e2a9bd31ee41445aab315754c44609a4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 12:32:17.198294 kubelet[2618]: E1216 12:32:17.198189 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b080131948f573f6aaca8e5165591a71e2a9bd31ee41445aab315754c44609a4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 12:32:17.198294 kubelet[2618]: E1216 12:32:17.198296 2618 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b080131948f573f6aaca8e5165591a71e2a9bd31ee41445aab315754c44609a4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-qj2hd" Dec 16 12:32:17.198446 kubelet[2618]: E1216 12:32:17.198323 2618 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b080131948f573f6aaca8e5165591a71e2a9bd31ee41445aab315754c44609a4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-qj2hd" Dec 16 12:32:17.198469 containerd[1503]: time="2025-12-16T12:32:17.198306302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-487ch,Uid:6e37a289-57f7-466c-bdab-d99dfb35d99e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699187afafe4ea6ec7d76a34a85e1617fbde05e33e2fef12ae556ecb8f43f47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 12:32:17.198560 kubelet[2618]: E1216 12:32:17.198496 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699187afafe4ea6ec7d76a34a85e1617fbde05e33e2fef12ae556ecb8f43f47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 12:32:17.198560 kubelet[2618]: E1216 12:32:17.198543 2618 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2699187afafe4ea6ec7d76a34a85e1617fbde05e33e2fef12ae556ecb8f43f47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-487ch" Dec 16 12:32:17.198560 kubelet[2618]: E1216 12:32:17.198559 2618 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699187afafe4ea6ec7d76a34a85e1617fbde05e33e2fef12ae556ecb8f43f47\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-487ch" Dec 16 12:32:17.200524 kubelet[2618]: E1216 12:32:17.200474 2618 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-487ch_kube-system(6e37a289-57f7-466c-bdab-d99dfb35d99e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-487ch_kube-system(6e37a289-57f7-466c-bdab-d99dfb35d99e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2699187afafe4ea6ec7d76a34a85e1617fbde05e33e2fef12ae556ecb8f43f47\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-487ch" podUID="6e37a289-57f7-466c-bdab-d99dfb35d99e" Dec 16 12:32:17.202111 kubelet[2618]: E1216 12:32:17.202058 2618 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qj2hd_kube-system(327e2140-122f-4ecf-8711-c2657b79a4a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qj2hd_kube-system(327e2140-122f-4ecf-8711-c2657b79a4a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b080131948f573f6aaca8e5165591a71e2a9bd31ee41445aab315754c44609a4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-qj2hd" podUID="327e2140-122f-4ecf-8711-c2657b79a4a5" Dec 16 12:32:18.214716 systemd-networkd[1437]: flannel.1: Link UP Dec 16 12:32:18.214726 systemd-networkd[1437]: flannel.1: Gained carrier Dec 16 12:32:19.031911 kubelet[2618]: I1216 12:32:19.031848 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-g4jvt" podStartSLOduration=3.257713991 podStartE2EDuration="7.031832511s" podCreationTimestamp="2025-12-16 12:32:12 +0000 UTC" firstStartedPulling="2025-12-16 12:32:12.855562595 +0000 UTC m=+5.967387215" lastFinishedPulling="2025-12-16 12:32:16.629681155 +0000 UTC m=+9.741505735" observedRunningTime="2025-12-16 12:32:18.049754204 +0000 UTC m=+11.161578824" watchObservedRunningTime="2025-12-16 12:32:19.031832511 +0000 UTC m=+12.143657131" Dec 16 12:32:19.592807 systemd-networkd[1437]: flannel.1: Gained IPv6LL Dec 16 12:32:21.469924 update_engine[1492]: I20251216 12:32:21.469305 1492 update_attempter.cc:509] Updating boot flags... 
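Both RunPodSandbox failures above share one root cause: the flannel CNI plugin cannot read /run/flannel/subnet.env, which kube-flannel only writes once its own container is running and the flannel.1 VXLAN link comes up (as it does about a second later in the log). A minimal sketch of that lookup step, assuming the conventional key=value layout of subnet.env; the field names and example values in the comments are inferred from the delegate config logged further down (cluster network 192.168.0.0/17, node subnet 192.168.0.0/24, MTU 1450), not taken from the flannel source:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// subnetEnv mirrors the values kube-flannel is expected to write to
// /run/flannel/subnet.env; keys and example values are assumptions based on
// the delegate netconf that appears later in this log.
type subnetEnv struct {
	Network string // FLANNEL_NETWORK, e.g. 192.168.0.0/17
	Subnet  string // FLANNEL_SUBNET,  e.g. 192.168.0.1/24
	MTU     string // FLANNEL_MTU,     e.g. 1450
	IPMasq  string // FLANNEL_IPMASQ,  e.g. true
}

func loadSubnetEnv(path string) (*subnetEnv, error) {
	f, err := os.Open(path) // before kube-flannel runs, this is the ENOENT seen above
	if err != nil {
		return nil, err
	}
	defer f.Close()

	env := &subnetEnv{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		switch k {
		case "FLANNEL_NETWORK":
			env.Network = v
		case "FLANNEL_SUBNET":
			env.Subnet = v
		case "FLANNEL_MTU":
			env.MTU = v
		case "FLANNEL_IPMASQ":
			env.IPMasq = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		// Until the file exists, the sandbox setup fails exactly as logged above.
		fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Printf("network=%s subnet=%s mtu=%s ipmasq=%s\n",
		env.Network, env.Subnet, env.MTU, env.IPMasq)
}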
Dec 16 12:32:27.991784 containerd[1503]: time="2025-12-16T12:32:27.991441521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qj2hd,Uid:327e2140-122f-4ecf-8711-c2657b79a4a5,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:28.017397 systemd-networkd[1437]: cni0: Link UP Dec 16 12:32:28.017401 systemd-networkd[1437]: cni0: Gained carrier Dec 16 12:32:28.024468 systemd-networkd[1437]: cni0: Lost carrier Dec 16 12:32:28.030953 systemd-networkd[1437]: vethd06dc98d: Link UP Dec 16 12:32:28.032792 kernel: cni0: port 1(vethd06dc98d) entered blocking state Dec 16 12:32:28.032880 kernel: cni0: port 1(vethd06dc98d) entered disabled state Dec 16 12:32:28.033588 kernel: vethd06dc98d: entered allmulticast mode Dec 16 12:32:28.034355 kernel: vethd06dc98d: entered promiscuous mode Dec 16 12:32:28.045866 kernel: cni0: port 1(vethd06dc98d) entered blocking state Dec 16 12:32:28.045959 kernel: cni0: port 1(vethd06dc98d) entered forwarding state Dec 16 12:32:28.046110 systemd-networkd[1437]: vethd06dc98d: Gained carrier Dec 16 12:32:28.046462 systemd-networkd[1437]: cni0: Gained carrier Dec 16 12:32:28.050223 containerd[1503]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001867f0), "name":"cbr0", "type":"bridge"} Dec 16 12:32:28.050223 containerd[1503]: delegateAdd: netconf sent to delegate plugin: Dec 16 12:32:28.095108 containerd[1503]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T12:32:28.095060007Z" level=info msg="connecting to shim 208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582" address="unix:///run/containerd/s/03836bb718c2047760f3a54572a1f77d0c0f5494838575bbd50e235bffaaf491" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:28.123454 systemd[1]: Started cri-containerd-208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582.scope - libcontainer container 208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582. 
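The two containerd entries above dump the netconf that the flannel CNI plugin hands to the bridge plugin, first as a Go map and then as compact JSON fused with the following "connecting to shim" message. A short sketch that rebuilds and pretty-prints the same delegate config, to make the relationship explicit: the node /24 becomes the host-local IPAM range, the cluster-wide /17 (the 0xff, 0xff, 0x80, 0x00 mask in the dump) becomes the single route, and mtu 1450 matches the flannel.1 VXLAN MTU. This is an illustration of the logged values, not the plugin's actual code:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Values copied from the delegateAdd netconf logged above.
	conf := map[string]interface{}{
		"cniVersion":       "0.3.1",
		"name":             "cbr0",
		"type":             "bridge",
		"hairpinMode":      true,
		"ipMasq":           false,
		"isGateway":        true,
		"isDefaultGateway": true,
		"mtu":              1450,
		"ipam": map[string]interface{}{
			"type":   "host-local",
			"ranges": [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			"routes": []map[string]string{{"dst": "192.168.0.0/17"}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out)) // same content as the one-line JSON in the log
}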
Dec 16 12:32:28.136763 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:32:28.159615 containerd[1503]: time="2025-12-16T12:32:28.159559896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qj2hd,Uid:327e2140-122f-4ecf-8711-c2657b79a4a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582\"" Dec 16 12:32:28.165985 containerd[1503]: time="2025-12-16T12:32:28.165893865Z" level=info msg="CreateContainer within sandbox \"208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:32:28.176016 containerd[1503]: time="2025-12-16T12:32:28.175962156Z" level=info msg="Container 6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:28.183175 containerd[1503]: time="2025-12-16T12:32:28.183114898Z" level=info msg="CreateContainer within sandbox \"208dc2c8260972075e50be63cc3e40c8d7406039bb30a82dc91fb9eb334eb582\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3\"" Dec 16 12:32:28.183868 containerd[1503]: time="2025-12-16T12:32:28.183760620Z" level=info msg="StartContainer for \"6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3\"" Dec 16 12:32:28.184698 containerd[1503]: time="2025-12-16T12:32:28.184668358Z" level=info msg="connecting to shim 6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3" address="unix:///run/containerd/s/03836bb718c2047760f3a54572a1f77d0c0f5494838575bbd50e235bffaaf491" protocol=ttrpc version=3 Dec 16 12:32:28.216467 systemd[1]: Started cri-containerd-6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3.scope - libcontainer container 6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3. 
Dec 16 12:32:28.242986 containerd[1503]: time="2025-12-16T12:32:28.242878760Z" level=info msg="StartContainer for \"6ad054200c4bb16fbe5f9a25ef1e4c30a8f8ee652125d48a56148d41fbb48ed3\" returns successfully" Dec 16 12:32:29.079268 kubelet[2618]: I1216 12:32:29.079191 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qj2hd" podStartSLOduration=17.079174153 podStartE2EDuration="17.079174153s" podCreationTimestamp="2025-12-16 12:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:29.078396665 +0000 UTC m=+22.190221245" watchObservedRunningTime="2025-12-16 12:32:29.079174153 +0000 UTC m=+22.190998773" Dec 16 12:32:29.832487 systemd-networkd[1437]: cni0: Gained IPv6LL Dec 16 12:32:30.088456 systemd-networkd[1437]: vethd06dc98d: Gained IPv6LL Dec 16 12:32:30.992612 containerd[1503]: time="2025-12-16T12:32:30.992574919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-487ch,Uid:6e37a289-57f7-466c-bdab-d99dfb35d99e,Namespace:kube-system,Attempt:0,}" Dec 16 12:32:31.006238 systemd-networkd[1437]: veth93ce89a3: Link UP Dec 16 12:32:31.008683 kernel: cni0: port 2(veth93ce89a3) entered blocking state Dec 16 12:32:31.008775 kernel: cni0: port 2(veth93ce89a3) entered disabled state Dec 16 12:32:31.008790 kernel: veth93ce89a3: entered allmulticast mode Dec 16 12:32:31.009875 kernel: veth93ce89a3: entered promiscuous mode Dec 16 12:32:31.018502 kernel: cni0: port 2(veth93ce89a3) entered blocking state Dec 16 12:32:31.018591 kernel: cni0: port 2(veth93ce89a3) entered forwarding state Dec 16 12:32:31.018560 systemd-networkd[1437]: veth93ce89a3: Gained carrier Dec 16 12:32:31.020587 containerd[1503]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001867f0), "name":"cbr0", "type":"bridge"} Dec 16 12:32:31.020587 containerd[1503]: delegateAdd: netconf sent to delegate plugin: Dec 16 12:32:31.049158 containerd[1503]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T12:32:31.049113792Z" level=info msg="connecting to shim e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06" address="unix:///run/containerd/s/5f42313840de6af86f34ea7582200fc8dccc190c12edede78734a946175ad494" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:32:31.091486 systemd[1]: Started cri-containerd-e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06.scope - libcontainer container e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06. 
Dec 16 12:32:31.106017 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:32:31.130480 containerd[1503]: time="2025-12-16T12:32:31.130437612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-487ch,Uid:6e37a289-57f7-466c-bdab-d99dfb35d99e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06\"" Dec 16 12:32:31.135080 containerd[1503]: time="2025-12-16T12:32:31.135038033Z" level=info msg="CreateContainer within sandbox \"e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:32:31.144476 containerd[1503]: time="2025-12-16T12:32:31.144413125Z" level=info msg="Container 28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:32:31.150211 containerd[1503]: time="2025-12-16T12:32:31.150159412Z" level=info msg="CreateContainer within sandbox \"e8a4dc881c3a9a1744dc5d65ecf65cc9d593f1157d1bc415c1419386fb6b8d06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879\"" Dec 16 12:32:31.151052 containerd[1503]: time="2025-12-16T12:32:31.150857172Z" level=info msg="StartContainer for \"28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879\"" Dec 16 12:32:31.151899 containerd[1503]: time="2025-12-16T12:32:31.151856868Z" level=info msg="connecting to shim 28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879" address="unix:///run/containerd/s/5f42313840de6af86f34ea7582200fc8dccc190c12edede78734a946175ad494" protocol=ttrpc version=3 Dec 16 12:32:31.179442 systemd[1]: Started cri-containerd-28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879.scope - libcontainer container 28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879. Dec 16 12:32:31.206597 containerd[1503]: time="2025-12-16T12:32:31.206559976Z" level=info msg="StartContainer for \"28be050db2f72b57a43c4ae9728a92c5023f9a081cb9fe8c4eda70e0f9505879\" returns successfully" Dec 16 12:32:32.112905 kubelet[2618]: I1216 12:32:32.112634 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-487ch" podStartSLOduration=20.11261791 podStartE2EDuration="20.11261791s" podCreationTimestamp="2025-12-16 12:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:32:32.094550645 +0000 UTC m=+25.206375265" watchObservedRunningTime="2025-12-16 12:32:32.11261791 +0000 UTC m=+25.224442530" Dec 16 12:32:32.904479 systemd-networkd[1437]: veth93ce89a3: Gained IPv6LL Dec 16 12:32:34.661462 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:54870.service - OpenSSH per-connection server daemon (10.0.0.1:54870). Dec 16 12:32:34.720599 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 54870 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:34.722241 sshd-session[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:34.726487 systemd-logind[1485]: New session 6 of user core. Dec 16 12:32:34.744627 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:32:34.875570 sshd[3562]: Connection closed by 10.0.0.1 port 54870 Dec 16 12:32:34.875932 sshd-session[3559]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:34.879612 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:54870.service: Deactivated successfully. Dec 16 12:32:34.881771 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:32:34.882738 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:32:34.884075 systemd-logind[1485]: Removed session 6. Dec 16 12:32:39.892981 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:54872.service - OpenSSH per-connection server daemon (10.0.0.1:54872). Dec 16 12:32:39.955987 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 54872 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:39.957890 sshd-session[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:39.965439 systemd-logind[1485]: New session 7 of user core. Dec 16 12:32:39.976487 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:32:40.114339 sshd[3607]: Connection closed by 10.0.0.1 port 54872 Dec 16 12:32:40.114984 sshd-session[3604]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:40.118926 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:54872.service: Deactivated successfully. Dec 16 12:32:40.120980 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:32:40.122871 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:32:40.124069 systemd-logind[1485]: Removed session 7. Dec 16 12:32:45.128669 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:60032.service - OpenSSH per-connection server daemon (10.0.0.1:60032). Dec 16 12:32:45.194917 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 60032 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:45.196284 sshd-session[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:45.201212 systemd-logind[1485]: New session 8 of user core. Dec 16 12:32:45.210485 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:32:45.337312 sshd[3648]: Connection closed by 10.0.0.1 port 60032 Dec 16 12:32:45.337610 sshd-session[3645]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:45.345583 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:60032.service: Deactivated successfully. Dec 16 12:32:45.347791 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:32:45.348817 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:32:45.351993 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:60040.service - OpenSSH per-connection server daemon (10.0.0.1:60040). Dec 16 12:32:45.352639 systemd-logind[1485]: Removed session 8. Dec 16 12:32:45.412724 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 60040 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:45.414055 sshd-session[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:45.419102 systemd-logind[1485]: New session 9 of user core. Dec 16 12:32:45.425465 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:32:45.593674 sshd[3665]: Connection closed by 10.0.0.1 port 60040 Dec 16 12:32:45.594096 sshd-session[3662]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:45.603433 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:60040.service: Deactivated successfully. 
Dec 16 12:32:45.608122 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:32:45.610522 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:32:45.615564 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:60048.service - OpenSSH per-connection server daemon (10.0.0.1:60048). Dec 16 12:32:45.618570 systemd-logind[1485]: Removed session 9. Dec 16 12:32:45.680468 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 60048 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:45.681763 sshd-session[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:45.686373 systemd-logind[1485]: New session 10 of user core. Dec 16 12:32:45.692461 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:32:45.811574 sshd[3679]: Connection closed by 10.0.0.1 port 60048 Dec 16 12:32:45.812305 sshd-session[3676]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:45.816566 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:60048.service: Deactivated successfully. Dec 16 12:32:45.818216 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:32:45.820543 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:32:45.823659 systemd-logind[1485]: Removed session 10. Dec 16 12:32:50.828691 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:60052.service - OpenSSH per-connection server daemon (10.0.0.1:60052). Dec 16 12:32:50.879097 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 60052 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:50.880389 sshd-session[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:50.884132 systemd-logind[1485]: New session 11 of user core. Dec 16 12:32:50.894451 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:32:51.025094 sshd[3715]: Connection closed by 10.0.0.1 port 60052 Dec 16 12:32:51.025626 sshd-session[3712]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:51.037594 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:60052.service: Deactivated successfully. Dec 16 12:32:51.039448 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:32:51.040381 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:32:51.042619 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:39422.service - OpenSSH per-connection server daemon (10.0.0.1:39422). Dec 16 12:32:51.043696 systemd-logind[1485]: Removed session 11. Dec 16 12:32:51.094696 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 39422 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:51.096449 sshd-session[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:51.101163 systemd-logind[1485]: New session 12 of user core. Dec 16 12:32:51.113494 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:32:51.290804 sshd[3732]: Connection closed by 10.0.0.1 port 39422 Dec 16 12:32:51.291355 sshd-session[3729]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:51.299378 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:39422.service: Deactivated successfully. Dec 16 12:32:51.301041 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:32:51.302143 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. 
Dec 16 12:32:51.303878 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:39432.service - OpenSSH per-connection server daemon (10.0.0.1:39432). Dec 16 12:32:51.305790 systemd-logind[1485]: Removed session 12. Dec 16 12:32:51.358618 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 39432 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:51.359938 sshd-session[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:51.364254 systemd-logind[1485]: New session 13 of user core. Dec 16 12:32:51.377441 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:32:51.978011 sshd[3747]: Connection closed by 10.0.0.1 port 39432 Dec 16 12:32:51.978425 sshd-session[3744]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:51.985338 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:39432.service: Deactivated successfully. Dec 16 12:32:51.988497 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:32:51.992069 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:32:51.996999 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446). Dec 16 12:32:51.998930 systemd-logind[1485]: Removed session 13. Dec 16 12:32:52.052320 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:52.053746 sshd-session[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:52.058412 systemd-logind[1485]: New session 14 of user core. Dec 16 12:32:52.068437 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:32:52.301701 sshd[3768]: Connection closed by 10.0.0.1 port 39446 Dec 16 12:32:52.302394 sshd-session[3765]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:52.312954 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:39446.service: Deactivated successfully. Dec 16 12:32:52.314976 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:32:52.317149 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:32:52.319808 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:39454.service - OpenSSH per-connection server daemon (10.0.0.1:39454). Dec 16 12:32:52.321011 systemd-logind[1485]: Removed session 14. Dec 16 12:32:52.374833 sshd[3779]: Accepted publickey for core from 10.0.0.1 port 39454 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:52.376139 sshd-session[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:52.380820 systemd-logind[1485]: New session 15 of user core. Dec 16 12:32:52.391431 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:32:52.505428 sshd[3782]: Connection closed by 10.0.0.1 port 39454 Dec 16 12:32:52.505813 sshd-session[3779]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:52.508782 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:39454.service: Deactivated successfully. Dec 16 12:32:52.510532 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:32:52.513881 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:32:52.514867 systemd-logind[1485]: Removed session 15. Dec 16 12:32:57.522422 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:39462.service - OpenSSH per-connection server daemon (10.0.0.1:39462). 
Dec 16 12:32:57.582406 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 39462 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:32:57.583694 sshd-session[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:57.588313 systemd-logind[1485]: New session 16 of user core. Dec 16 12:32:57.600468 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:32:57.730321 sshd[3823]: Connection closed by 10.0.0.1 port 39462 Dec 16 12:32:57.730649 sshd-session[3820]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:57.734507 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:39462.service: Deactivated successfully. Dec 16 12:32:57.739131 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:32:57.740520 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:32:57.742742 systemd-logind[1485]: Removed session 16. Dec 16 12:33:02.742372 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). Dec 16 12:33:02.802697 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:33:02.804164 sshd-session[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:33:02.808055 systemd-logind[1485]: New session 17 of user core. Dec 16 12:33:02.818466 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:33:02.934528 sshd[3860]: Connection closed by 10.0.0.1 port 35052 Dec 16 12:33:02.935053 sshd-session[3857]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:02.938883 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:35052.service: Deactivated successfully. Dec 16 12:33:02.942704 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:33:02.943537 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:33:02.944650 systemd-logind[1485]: Removed session 17.