May 13 23:45:31.926709 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:45:31.926731 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 13 23:45:31.926741 kernel: KASLR enabled May 13 23:45:31.926811 kernel: efi: EFI v2.7 by EDK II May 13 23:45:31.926817 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18 May 13 23:45:31.926823 kernel: random: crng init done May 13 23:45:31.926830 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 13 23:45:31.926836 kernel: secureboot: Secure boot enabled May 13 23:45:31.926841 kernel: ACPI: Early table checksum verification disabled May 13 23:45:31.926847 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS ) May 13 23:45:31.926856 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 23:45:31.926861 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926868 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926874 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926881 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926889 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926895 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926902 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926908 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926914 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:45:31.926921 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 23:45:31.926927 kernel: NUMA: Failed to initialise from firmware May 13 23:45:31.926933 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:45:31.926940 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff] May 13 23:45:31.926946 kernel: Zone ranges: May 13 23:45:31.926954 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:45:31.926960 kernel: DMA32 empty May 13 23:45:31.926967 kernel: Normal empty May 13 23:45:31.926973 kernel: Movable zone start for each node May 13 23:45:31.926979 kernel: Early memory node ranges May 13 23:45:31.926985 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff] May 13 23:45:31.926991 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff] May 13 23:45:31.926997 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff] May 13 23:45:31.927003 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 13 23:45:31.927009 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 23:45:31.927024 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:45:31.927030 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 23:45:31.927039 kernel: psci: probing for conduit method from ACPI. May 13 23:45:31.927045 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 23:45:31.927051 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:45:31.927060 kernel: psci: Trusted OS migration not required May 13 23:45:31.927066 kernel: psci: SMC Calling Convention v1.1 May 13 23:45:31.927073 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 23:45:31.927079 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:45:31.927088 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:45:31.927094 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 23:45:31.927101 kernel: Detected PIPT I-cache on CPU0 May 13 23:45:31.927108 kernel: CPU features: detected: GIC system register CPU interface May 13 23:45:31.927114 kernel: CPU features: detected: Hardware dirty bit management May 13 23:45:31.927121 kernel: CPU features: detected: Spectre-v4 May 13 23:45:31.927127 kernel: CPU features: detected: Spectre-BHB May 13 23:45:31.927134 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:45:31.927140 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:45:31.927146 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:45:31.927154 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:45:31.927161 kernel: alternatives: applying boot alternatives May 13 23:45:31.927168 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:45:31.927175 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:45:31.927181 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:45:31.927188 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:45:31.927194 kernel: Fallback order for Node 0: 0 May 13 23:45:31.927200 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 23:45:31.927207 kernel: Policy zone: DMA May 13 23:45:31.927213 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:45:31.927221 kernel: software IO TLB: area num 4. May 13 23:45:31.927227 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB) May 13 23:45:31.927234 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved) May 13 23:45:31.927241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:45:31.927247 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:45:31.927254 kernel: rcu: RCU event tracing is enabled. May 13 23:45:31.927261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:45:31.927267 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:45:31.927273 kernel: Tracing variant of Tasks RCU enabled. May 13 23:45:31.927280 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:45:31.927286 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:45:31.927293 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:45:31.927301 kernel: GICv3: 256 SPIs implemented May 13 23:45:31.927307 kernel: GICv3: 0 Extended SPIs implemented May 13 23:45:31.927314 kernel: Root IRQ handler: gic_handle_irq May 13 23:45:31.927320 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:45:31.927327 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 23:45:31.927334 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 23:45:31.927340 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 23:45:31.927360 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 23:45:31.927367 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 23:45:31.927373 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 23:45:31.927380 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:45:31.927388 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:45:31.927395 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:45:31.927402 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:45:31.927408 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:45:31.927415 kernel: arm-pv: using stolen time PV May 13 23:45:31.927422 kernel: Console: colour dummy device 80x25 May 13 23:45:31.927428 kernel: ACPI: Core revision 20230628 May 13 23:45:31.927435 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:45:31.927442 kernel: pid_max: default: 32768 minimum: 301 May 13 23:45:31.927448 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:45:31.927456 kernel: landlock: Up and running. May 13 23:45:31.927462 kernel: SELinux: Initializing. May 13 23:45:31.927469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:45:31.927476 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:45:31.927482 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 23:45:31.927489 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:45:31.927496 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:45:31.927503 kernel: rcu: Hierarchical SRCU implementation. May 13 23:45:31.927510 kernel: rcu: Max phase no-delay instances is 400. May 13 23:45:31.927518 kernel: Platform MSI: ITS@0x8080000 domain created May 13 23:45:31.927525 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 23:45:31.927531 kernel: Remapping and enabling EFI services. May 13 23:45:31.927537 kernel: smp: Bringing up secondary CPUs ... 
May 13 23:45:31.927544 kernel: Detected PIPT I-cache on CPU1 May 13 23:45:31.927551 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 23:45:31.927557 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 23:45:31.927564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:45:31.927570 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:45:31.927577 kernel: Detected PIPT I-cache on CPU2 May 13 23:45:31.927585 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 23:45:31.927592 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 23:45:31.927605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:45:31.927613 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 23:45:31.927620 kernel: Detected PIPT I-cache on CPU3 May 13 23:45:31.927627 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 23:45:31.927634 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 23:45:31.927641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:45:31.927648 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 23:45:31.927654 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:45:31.927662 kernel: SMP: Total of 4 processors activated. May 13 23:45:31.927670 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:45:31.927677 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:45:31.927684 kernel: CPU features: detected: Common not Private translations May 13 23:45:31.927691 kernel: CPU features: detected: CRC32 instructions May 13 23:45:31.927698 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:45:31.927705 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:45:31.927713 kernel: CPU features: detected: LSE atomic instructions May 13 23:45:31.927720 kernel: CPU features: detected: Privileged Access Never May 13 23:45:31.927727 kernel: CPU features: detected: RAS Extension Support May 13 23:45:31.927734 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:45:31.927741 kernel: CPU: All CPU(s) started at EL1 May 13 23:45:31.927755 kernel: alternatives: applying system-wide alternatives May 13 23:45:31.927804 kernel: devtmpfs: initialized May 13 23:45:31.927812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:45:31.927819 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:45:31.927829 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:45:31.927836 kernel: SMBIOS 3.0.0 present. 
May 13 23:45:31.927843 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 23:45:31.927850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:45:31.927857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:45:31.927864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:45:31.927871 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:45:31.927878 kernel: audit: initializing netlink subsys (disabled) May 13 23:45:31.927885 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1 May 13 23:45:31.927894 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:45:31.927901 kernel: cpuidle: using governor menu May 13 23:45:31.927908 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:45:31.927915 kernel: ASID allocator initialised with 32768 entries May 13 23:45:31.927922 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:45:31.927929 kernel: Serial: AMBA PL011 UART driver May 13 23:45:31.927936 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:45:31.927943 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:45:31.927950 kernel: Modules: 509232 pages in range for PLT usage May 13 23:45:31.927958 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:45:31.927965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:45:31.927972 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:45:31.927979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:45:31.927986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:45:31.927993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:45:31.928000 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:45:31.928007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:45:31.928018 kernel: ACPI: Added _OSI(Module Device) May 13 23:45:31.928028 kernel: ACPI: Added _OSI(Processor Device) May 13 23:45:31.928035 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:45:31.928042 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:45:31.928049 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:45:31.928056 kernel: ACPI: Interpreter enabled May 13 23:45:31.928063 kernel: ACPI: Using GIC for interrupt routing May 13 23:45:31.928070 kernel: ACPI: MCFG table detected, 1 entries May 13 23:45:31.928077 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 23:45:31.928084 kernel: printk: console [ttyAMA0] enabled May 13 23:45:31.928092 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:45:31.928242 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:45:31.928317 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 23:45:31.928390 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 23:45:31.928454 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 23:45:31.928692 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 23:45:31.928708 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 23:45:31.928722 
kernel: PCI host bridge to bus 0000:00 May 13 23:45:31.928884 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 23:45:31.928956 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 23:45:31.929025 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 23:45:31.929101 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:45:31.929313 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 23:45:31.929521 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:45:31.929613 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 23:45:31.929682 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 23:45:31.929767 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:45:31.929843 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:45:31.929912 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 23:45:31.929979 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 23:45:31.930054 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 23:45:31.930123 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 23:45:31.930185 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 23:45:31.930195 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 23:45:31.930283 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 23:45:31.930293 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 23:45:31.930300 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 23:45:31.930307 kernel: iommu: Default domain type: Translated May 13 23:45:31.930315 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:45:31.930327 kernel: efivars: Registered efivars operations May 13 23:45:31.930334 kernel: vgaarb: loaded May 13 23:45:31.930342 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:45:31.930349 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:45:31.930356 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:45:31.930363 kernel: pnp: PnP ACPI init May 13 23:45:31.930461 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 23:45:31.930472 kernel: pnp: PnP ACPI: found 1 devices May 13 23:45:31.930483 kernel: NET: Registered PF_INET protocol family May 13 23:45:31.930490 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:45:31.930498 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:45:31.930505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:45:31.930512 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:45:31.930519 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:45:31.930526 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:45:31.930533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:45:31.930540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:45:31.930549 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:45:31.930556 kernel: PCI: CLS 0 bytes, default 64 May 13 23:45:31.930563 kernel: kvm [1]: HYP mode not available 
May 13 23:45:31.930570 kernel: Initialise system trusted keyrings May 13 23:45:31.930577 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:45:31.930584 kernel: Key type asymmetric registered May 13 23:45:31.930591 kernel: Asymmetric key parser 'x509' registered May 13 23:45:31.930598 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:45:31.930605 kernel: io scheduler mq-deadline registered May 13 23:45:31.930614 kernel: io scheduler kyber registered May 13 23:45:31.930621 kernel: io scheduler bfq registered May 13 23:45:31.930628 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:45:31.930635 kernel: ACPI: button: Power Button [PWRB] May 13 23:45:31.930642 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:45:31.930821 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 23:45:31.930836 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:45:31.930843 kernel: thunder_xcv, ver 1.0 May 13 23:45:31.930850 kernel: thunder_bgx, ver 1.0 May 13 23:45:31.930862 kernel: nicpf, ver 1.0 May 13 23:45:31.930869 kernel: nicvf, ver 1.0 May 13 23:45:31.931052 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:45:31.931127 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:45:31 UTC (1747179931) May 13 23:45:31.931137 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:45:31.931144 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:45:31.931151 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:45:31.931159 kernel: watchdog: Hard watchdog permanently disabled May 13 23:45:31.931171 kernel: NET: Registered PF_INET6 protocol family May 13 23:45:31.931178 kernel: Segment Routing with IPv6 May 13 23:45:31.931185 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:45:31.931191 kernel: NET: Registered PF_PACKET protocol family May 13 23:45:31.931199 kernel: Key type dns_resolver registered May 13 23:45:31.931205 kernel: registered taskstats version 1 May 13 23:45:31.931213 kernel: Loading compiled-in X.509 certificates May 13 23:45:31.931220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:45:31.931227 kernel: Key type .fscrypt registered May 13 23:45:31.931237 kernel: Key type fscrypt-provisioning registered May 13 23:45:31.931244 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:45:31.931251 kernel: ima: Allocated hash algorithm: sha1 May 13 23:45:31.931259 kernel: ima: No architecture policies found May 13 23:45:31.931266 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:45:31.931273 kernel: clk: Disabling unused clocks May 13 23:45:31.931355 kernel: Freeing unused kernel memory: 38464K May 13 23:45:31.931369 kernel: Run /init as init process May 13 23:45:31.931385 kernel: with arguments: May 13 23:45:31.931397 kernel: /init May 13 23:45:31.931404 kernel: with environment: May 13 23:45:31.931411 kernel: HOME=/ May 13 23:45:31.931418 kernel: TERM=linux May 13 23:45:31.931425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:45:31.931434 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:45:31.931445 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:45:31.931454 systemd[1]: Detected virtualization kvm. May 13 23:45:31.931464 systemd[1]: Detected architecture arm64. May 13 23:45:31.931471 systemd[1]: Running in initrd. May 13 23:45:31.931479 systemd[1]: No hostname configured, using default hostname. May 13 23:45:31.931487 systemd[1]: Hostname set to . May 13 23:45:31.931495 systemd[1]: Initializing machine ID from VM UUID. May 13 23:45:31.931502 systemd[1]: Queued start job for default target initrd.target. May 13 23:45:31.931510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:45:31.931518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:45:31.931528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:45:31.931536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:45:31.931544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:45:31.931552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:45:31.931561 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:45:31.931569 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:45:31.931577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:45:31.931587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:45:31.931595 systemd[1]: Reached target paths.target - Path Units. May 13 23:45:31.931603 systemd[1]: Reached target slices.target - Slice Units. May 13 23:45:31.931610 systemd[1]: Reached target swap.target - Swaps. May 13 23:45:31.931618 systemd[1]: Reached target timers.target - Timer Units. May 13 23:45:31.931626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:45:31.931634 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:45:31.931641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:45:31.931651 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:45:31.931658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:45:31.931666 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:45:31.931673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:45:31.931681 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:45:31.931689 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:45:31.931696 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:45:31.931704 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:45:31.931711 systemd[1]: Starting systemd-fsck-usr.service... 
May 13 23:45:31.931721 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:45:31.931729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:45:31.931736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:45:31.931773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:45:31.931782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:45:31.931790 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:45:31.931800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:45:31.931839 systemd-journald[233]: Collecting audit messages is disabled. May 13 23:45:31.931862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:45:31.931871 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:45:31.931880 systemd-journald[233]: Journal started May 13 23:45:31.931899 systemd-journald[233]: Runtime Journal (/run/log/journal/929a80419fa144868742df80de2879e9) is 5.9M, max 47.3M, 41.4M free. May 13 23:45:31.941815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:45:31.941854 kernel: Bridge firewalling registered May 13 23:45:31.921088 systemd-modules-load[239]: Inserted module 'overlay' May 13 23:45:31.938307 systemd-modules-load[239]: Inserted module 'br_netfilter' May 13 23:45:31.946833 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:45:31.946855 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:45:31.948199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:45:31.952232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:45:31.954319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:45:31.957171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:45:31.965198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:45:31.969255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:45:31.970730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:45:31.973532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:45:31.979690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:45:31.982306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:45:32.007423 dracut-cmdline[279]: dracut-dracut-053 May 13 23:45:32.009968 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:45:32.035317 systemd-resolved[280]: Positive Trust Anchors: May 13 23:45:32.035334 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:45:32.035365 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:45:32.045387 systemd-resolved[280]: Defaulting to hostname 'linux'. May 13 23:45:32.046832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:45:32.048915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:45:32.097785 kernel: SCSI subsystem initialized May 13 23:45:32.102763 kernel: Loading iSCSI transport class v2.0-870. May 13 23:45:32.110774 kernel: iscsi: registered transport (tcp) May 13 23:45:32.124786 kernel: iscsi: registered transport (qla4xxx) May 13 23:45:32.124843 kernel: QLogic iSCSI HBA Driver May 13 23:45:32.169936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:45:32.172129 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:45:32.208196 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:45:32.208284 kernel: device-mapper: uevent: version 1.0.3 May 13 23:45:32.210167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:45:32.261783 kernel: raid6: neonx8 gen() 15681 MB/s May 13 23:45:32.278766 kernel: raid6: neonx4 gen() 15788 MB/s May 13 23:45:32.295764 kernel: raid6: neonx2 gen() 13195 MB/s May 13 23:45:32.312771 kernel: raid6: neonx1 gen() 10458 MB/s May 13 23:45:32.329767 kernel: raid6: int64x8 gen() 6786 MB/s May 13 23:45:32.346776 kernel: raid6: int64x4 gen() 7322 MB/s May 13 23:45:32.363760 kernel: raid6: int64x2 gen() 6108 MB/s May 13 23:45:32.380959 kernel: raid6: int64x1 gen() 5033 MB/s May 13 23:45:32.381003 kernel: raid6: using algorithm neonx4 gen() 15788 MB/s May 13 23:45:32.398965 kernel: raid6: .... xor() 12430 MB/s, rmw enabled May 13 23:45:32.398985 kernel: raid6: using neon recovery algorithm May 13 23:45:32.403772 kernel: xor: measuring software checksum speed May 13 23:45:32.405111 kernel: 8regs : 18594 MB/sec May 13 23:45:32.405123 kernel: 32regs : 21630 MB/sec May 13 23:45:32.405776 kernel: arm64_neon : 26778 MB/sec May 13 23:45:32.405788 kernel: xor: using function: arm64_neon (26778 MB/sec) May 13 23:45:32.461767 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:45:32.475796 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:45:32.478501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:45:32.503227 systemd-udevd[463]: Using default interface naming scheme 'v255'. May 13 23:45:32.507052 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:45:32.510952 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 13 23:45:32.545272 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation May 13 23:45:32.574646 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:45:32.577201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:45:32.633907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:45:32.637786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:45:32.661903 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:45:32.663569 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:45:32.666963 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:45:32.669866 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:45:32.672882 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:45:32.694594 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:45:32.699418 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 23:45:32.699580 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:45:32.702127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:45:32.702187 kernel: GPT:9289727 != 19775487 May 13 23:45:32.702934 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:45:32.704009 kernel: GPT:9289727 != 19775487 May 13 23:45:32.704054 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:45:32.704761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:45:32.705158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:45:32.705284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:45:32.708528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:45:32.710208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:45:32.710385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:45:32.713443 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:45:32.715953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:45:32.742605 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513) May 13 23:45:32.742669 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (523) May 13 23:45:32.743618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:45:32.756561 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:45:32.764358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:45:32.770761 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:45:32.772028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:45:32.780799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:45:32.782900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 13 23:45:32.784876 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:45:32.811323 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:45:32.832057 disk-uuid[553]: Primary Header is updated. May 13 23:45:32.832057 disk-uuid[553]: Secondary Entries is updated. May 13 23:45:32.832057 disk-uuid[553]: Secondary Header is updated. May 13 23:45:32.837389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:45:33.846770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:45:33.847108 disk-uuid[562]: The operation has completed successfully. May 13 23:45:33.874073 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:45:33.874181 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:45:33.899522 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:45:33.915035 sh[574]: Success May 13 23:45:33.927066 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:45:33.965130 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:45:33.967306 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:45:33.980511 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:45:33.986853 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 13 23:45:33.986908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:45:33.986919 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:45:33.987944 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:45:33.989383 kernel: BTRFS info (device dm-0): using free space tree May 13 23:45:33.994868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:45:33.996072 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:45:33.996819 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:45:34.000222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:45:34.028904 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:45:34.028961 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:45:34.028988 kernel: BTRFS info (device vda6): using free space tree May 13 23:45:34.031776 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:45:34.038036 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:45:34.044575 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:45:34.046626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:45:34.107520 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:45:34.110947 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 13 23:45:34.156887 ignition[672]: Ignition 2.20.0 May 13 23:45:34.156898 ignition[672]: Stage: fetch-offline May 13 23:45:34.156935 ignition[672]: no configs at "/usr/lib/ignition/base.d" May 13 23:45:34.156944 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:34.157114 ignition[672]: parsed url from cmdline: "" May 13 23:45:34.157118 ignition[672]: no config URL provided May 13 23:45:34.157123 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:45:34.157131 ignition[672]: no config at "/usr/lib/ignition/user.ign" May 13 23:45:34.162999 systemd-networkd[757]: lo: Link UP May 13 23:45:34.157157 ignition[672]: op(1): [started] loading QEMU firmware config module May 13 23:45:34.163003 systemd-networkd[757]: lo: Gained carrier May 13 23:45:34.157162 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:45:34.163791 systemd-networkd[757]: Enumeration completed May 13 23:45:34.164110 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:45:34.164175 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:45:34.164179 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:45:34.173851 ignition[672]: op(1): [finished] loading QEMU firmware config module May 13 23:45:34.164892 systemd-networkd[757]: eth0: Link UP May 13 23:45:34.173876 ignition[672]: QEMU firmware config was not found. Ignoring... May 13 23:45:34.164895 systemd-networkd[757]: eth0: Gained carrier May 13 23:45:34.164901 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:45:34.165682 systemd[1]: Reached target network.target - Network. May 13 23:45:34.187792 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:45:34.200756 ignition[672]: parsing config with SHA512: 4410c50b696340c531a1797454b02be208273967edbcc188d13b0e6c9e2f534bf2341ef070f80d716d68f1fbdadbbe0f42224788b804de37237c887c989479a6 May 13 23:45:34.207212 unknown[672]: fetched base config from "system" May 13 23:45:34.207223 unknown[672]: fetched user config from "qemu" May 13 23:45:34.207654 ignition[672]: fetch-offline: fetch-offline passed May 13 23:45:34.209223 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:45:34.207722 ignition[672]: Ignition finished successfully May 13 23:45:34.211379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 23:45:34.212264 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:45:34.236093 ignition[771]: Ignition 2.20.0 May 13 23:45:34.236104 ignition[771]: Stage: kargs May 13 23:45:34.236257 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 13 23:45:34.236267 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:34.237119 ignition[771]: kargs: kargs passed May 13 23:45:34.237164 ignition[771]: Ignition finished successfully May 13 23:45:34.240811 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:45:34.243091 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 13 23:45:34.267505 ignition[779]: Ignition 2.20.0 May 13 23:45:34.267515 ignition[779]: Stage: disks May 13 23:45:34.267682 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 13 23:45:34.267692 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:34.270274 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:45:34.268667 ignition[779]: disks: disks passed May 13 23:45:34.268845 ignition[779]: Ignition finished successfully May 13 23:45:34.273344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:45:34.274785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:45:34.276665 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:45:34.278703 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:45:34.280797 systemd[1]: Reached target basic.target - Basic System. May 13 23:45:34.283470 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:45:34.311078 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.67 May 13 23:45:34.311094 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. May 13 23:45:34.313993 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:45:34.318157 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:45:34.320681 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:45:34.382774 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 13 23:45:34.383396 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:45:34.384719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:45:34.388009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:45:34.390372 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:45:34.391474 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:45:34.391521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:45:34.391560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:45:34.400397 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:45:34.402631 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:45:34.408549 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799) May 13 23:45:34.408582 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:45:34.408593 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:45:34.410186 kernel: BTRFS info (device vda6): using free space tree May 13 23:45:34.421770 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:45:34.424348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:45:34.456890 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:45:34.460194 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory May 13 23:45:34.463411 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:45:34.467516 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:45:34.543015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:45:34.545200 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:45:34.546872 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:45:34.560815 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:45:34.576848 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:45:34.587157 ignition[913]: INFO : Ignition 2.20.0 May 13 23:45:34.587157 ignition[913]: INFO : Stage: mount May 13 23:45:34.588876 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:45:34.588876 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:34.588876 ignition[913]: INFO : mount: mount passed May 13 23:45:34.588876 ignition[913]: INFO : Ignition finished successfully May 13 23:45:34.591144 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:45:34.593834 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:45:34.985687 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:45:34.987187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:45:35.008714 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) May 13 23:45:35.008776 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:45:35.009843 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:45:35.013166 kernel: BTRFS info (device vda6): using free space tree May 13 23:45:35.015773 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:45:35.019616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:45:35.046248 ignition[944]: INFO : Ignition 2.20.0 May 13 23:45:35.046248 ignition[944]: INFO : Stage: files May 13 23:45:35.048077 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:45:35.048077 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:35.048077 ignition[944]: DEBUG : files: compiled without relabeling support, skipping May 13 23:45:35.051602 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:45:35.051602 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:45:35.054782 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:45:35.056159 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:45:35.056159 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:45:35.055477 unknown[944]: wrote ssh authorized keys file for user: core May 13 23:45:35.060256 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 13 23:45:35.060256 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 13 23:45:35.138953 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:45:35.409975 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 13 23:45:35.409975 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:45:35.413916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:45:35.413916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:45:35.413916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:45:35.413916 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 23:45:35.422759 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 13 23:45:35.724084 systemd-networkd[757]: eth0: Gained IPv6LL May 13 23:45:35.734170 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:45:36.009201 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 23:45:36.009201 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 23:45:36.013913 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 13 23:45:36.037979 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:45:36.041836 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:45:36.043838 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 13 23:45:36.043838 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 13 23:45:36.043838 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:45:36.043838 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:45:36.043838 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:45:36.043838 ignition[944]: INFO : files: files passed May 13 23:45:36.043838 ignition[944]: INFO : Ignition finished successfully May 13 23:45:36.045304 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:45:36.047662 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:45:36.049593 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 13 23:45:36.065056 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:45:36.065185 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:45:36.068788 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory May 13 23:45:36.070575 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:45:36.070575 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:45:36.074478 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:45:36.072717 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:45:36.075981 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:45:36.080522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:45:36.142175 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:45:36.142319 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:45:36.144636 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:45:36.147340 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:45:36.149598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:45:36.150688 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:45:36.181639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:45:36.184425 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:45:36.206460 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:45:36.208274 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:45:36.210489 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:45:36.212350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:45:36.212480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:45:36.215254 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:45:36.217515 systemd[1]: Stopped target basic.target - Basic System. May 13 23:45:36.219452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:45:36.221001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:45:36.223043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:45:36.225248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:45:36.227210 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:45:36.229261 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:45:36.231377 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:45:36.233239 systemd[1]: Stopped target swap.target - Swaps. May 13 23:45:36.234942 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:45:36.235099 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:45:36.237715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 13 23:45:36.239976 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:45:36.241996 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:45:36.242861 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:45:36.244157 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:45:36.244293 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:45:36.247230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:45:36.247378 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:45:36.249446 systemd[1]: Stopped target paths.target - Path Units. May 13 23:45:36.251160 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:45:36.257782 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:45:36.259156 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:45:36.261558 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:45:36.263423 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:45:36.263516 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:45:36.265102 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:45:36.265184 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:45:36.266775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:45:36.266900 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:45:36.268786 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:45:36.268908 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:45:36.271445 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:45:36.274587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:45:36.275538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:45:36.275672 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:45:36.277617 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:45:36.277727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:45:36.291017 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:45:36.291116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:45:36.300179 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:45:36.303269 ignition[1000]: INFO : Ignition 2.20.0 May 13 23:45:36.303269 ignition[1000]: INFO : Stage: umount May 13 23:45:36.305206 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:45:36.305206 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:45:36.305206 ignition[1000]: INFO : umount: umount passed May 13 23:45:36.305206 ignition[1000]: INFO : Ignition finished successfully May 13 23:45:36.307085 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:45:36.307190 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:45:36.308973 systemd[1]: Stopped target network.target - Network. May 13 23:45:36.310423 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 13 23:45:36.310499 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:45:36.311583 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:45:36.311635 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:45:36.313377 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:45:36.313427 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:45:36.315244 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:45:36.315293 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:45:36.317305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:45:36.319818 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:45:36.328379 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:45:36.329801 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:45:36.332728 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:45:36.332991 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:45:36.333085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:45:36.337282 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:45:36.338113 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:45:36.338168 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:45:36.340792 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:45:36.341681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:45:36.341771 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:45:36.344322 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:45:36.344376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:45:36.347303 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:45:36.347353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:45:36.348594 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:45:36.348648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:45:36.351611 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:45:36.356689 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:45:36.356782 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:45:36.375038 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:45:36.375150 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:45:36.377375 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:45:36.377517 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:45:36.379488 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:45:36.380854 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:45:36.383262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:45:36.383315 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 13 23:45:36.385252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:45:36.385295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:45:36.390672 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:45:36.390755 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:45:36.394600 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:45:36.394668 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:45:36.397574 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:45:36.397648 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:45:36.401883 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:45:36.401963 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:45:36.407144 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:45:36.408339 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:45:36.408413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:45:36.411757 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:45:36.411814 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:45:36.414177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:45:36.414238 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:45:36.416636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:45:36.416698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:45:36.420726 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:45:36.420809 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:45:36.421151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:45:36.421241 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:45:36.423125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:45:36.426192 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:45:36.442643 systemd[1]: Switching root. May 13 23:45:36.473064 systemd-journald[233]: Journal stopped May 13 23:45:37.387383 systemd-journald[233]: Received SIGTERM from PID 1 (systemd). May 13 23:45:37.387448 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:45:37.387465 kernel: SELinux: policy capability open_perms=1 May 13 23:45:37.387475 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:45:37.387485 kernel: SELinux: policy capability always_check_network=0 May 13 23:45:37.387494 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:45:37.387503 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:45:37.387513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:45:37.387522 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:45:37.387533 kernel: audit: type=1403 audit(1747179936.633:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:45:37.387544 systemd[1]: Successfully loaded SELinux policy in 33.681ms. 
May 13 23:45:37.387565 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.567ms. May 13 23:45:37.387576 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:45:37.387587 systemd[1]: Detected virtualization kvm. May 13 23:45:37.387598 systemd[1]: Detected architecture arm64. May 13 23:45:37.387609 systemd[1]: Detected first boot. May 13 23:45:37.387621 systemd[1]: Initializing machine ID from VM UUID. May 13 23:45:37.387632 zram_generator::config[1047]: No configuration found. May 13 23:45:37.387645 kernel: NET: Registered PF_VSOCK protocol family May 13 23:45:37.387655 systemd[1]: Populated /etc with preset unit settings. May 13 23:45:37.387665 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:45:37.387676 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:45:37.387686 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:45:37.387696 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:45:37.387707 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:45:37.387718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:45:37.387730 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:45:37.387741 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:45:37.387761 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:45:37.387773 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:45:37.387783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:45:37.387794 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:45:37.387805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:45:37.387816 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:45:37.387827 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:45:37.387840 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:45:37.387851 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:45:37.387863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:45:37.387874 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 23:45:37.387884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:45:37.387895 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:45:37.387907 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:45:37.387919 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:45:37.387937 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
May 13 23:45:37.387950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:45:37.387961 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:45:37.387972 systemd[1]: Reached target slices.target - Slice Units. May 13 23:45:37.387982 systemd[1]: Reached target swap.target - Swaps. May 13 23:45:37.387993 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:45:37.388003 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:45:37.388014 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:45:37.388027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:45:37.388038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:45:37.388048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:45:37.388060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:45:37.388071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:45:37.388081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:45:37.388092 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:45:37.388104 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:45:37.388115 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:45:37.388127 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:45:37.388143 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:45:37.388154 systemd[1]: Reached target machines.target - Containers. May 13 23:45:37.388164 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:45:37.388175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:45:37.388185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:45:37.388196 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:45:37.388206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:45:37.388217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:45:37.388230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:45:37.388240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:45:37.388251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:45:37.388263 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:45:37.388273 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:45:37.388284 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:45:37.388294 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:45:37.388304 systemd[1]: Stopped systemd-fsck-usr.service. 
May 13 23:45:37.388318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:45:37.388329 kernel: fuse: init (API version 7.39) May 13 23:45:37.388340 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:45:37.388350 kernel: loop: module loaded May 13 23:45:37.388360 kernel: ACPI: bus type drm_connector registered May 13 23:45:37.388370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:45:37.388381 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:45:37.388392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:45:37.388402 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:45:37.388415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:45:37.388426 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:45:37.388436 systemd[1]: Stopped verity-setup.service. May 13 23:45:37.388452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:45:37.388463 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:45:37.388475 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:45:37.388486 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:45:37.388496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:45:37.388507 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:45:37.388543 systemd-journald[1108]: Collecting audit messages is disabled. May 13 23:45:37.388570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:45:37.390370 systemd-journald[1108]: Journal started May 13 23:45:37.392319 systemd-journald[1108]: Runtime Journal (/run/log/journal/929a80419fa144868742df80de2879e9) is 5.9M, max 47.3M, 41.4M free. May 13 23:45:37.392407 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:45:37.392443 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:45:37.107358 systemd[1]: Queued start job for default target multi-user.target. May 13 23:45:37.118946 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:45:37.119382 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:45:37.396261 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:45:37.397191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:45:37.400232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:45:37.400427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:45:37.402177 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:45:37.402352 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:45:37.403900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:45:37.404102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:45:37.405883 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 13 23:45:37.406078 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:45:37.407544 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:45:37.407722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:45:37.409172 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:45:37.411788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:45:37.413614 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:45:37.415434 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:45:37.429909 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:45:37.433331 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:45:37.435940 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:45:37.437212 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:45:37.437254 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:45:37.439519 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:45:37.451846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:45:37.454372 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:45:37.455653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:45:37.457414 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:45:37.460579 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:45:37.461860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:45:37.465943 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:45:37.473919 systemd-journald[1108]: Time spent on flushing to /var/log/journal/929a80419fa144868742df80de2879e9 is 23.468ms for 867 entries. May 13 23:45:37.473919 systemd-journald[1108]: System Journal (/var/log/journal/929a80419fa144868742df80de2879e9) is 8M, max 195.6M, 187.6M free. May 13 23:45:37.514404 systemd-journald[1108]: Received client request to flush runtime journal. May 13 23:45:37.514460 kernel: loop0: detected capacity change from 0 to 126448 May 13 23:45:37.467337 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:45:37.471229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:45:37.473956 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:45:37.480090 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:45:37.484792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:45:37.486453 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:45:37.490106 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 13 23:45:37.499004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:45:37.502170 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:45:37.507679 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:45:37.514005 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:45:37.519591 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:45:37.521657 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:45:37.524676 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. May 13 23:45:37.524700 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. May 13 23:45:37.527851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:45:37.532566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:45:37.537443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:45:37.540555 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:45:37.547753 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:45:37.562815 kernel: loop1: detected capacity change from 0 to 201592 May 13 23:45:37.570094 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:45:37.593793 kernel: loop2: detected capacity change from 0 to 103832 May 13 23:45:37.599358 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:45:37.603161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:45:37.623806 kernel: loop3: detected capacity change from 0 to 126448 May 13 23:45:37.630837 kernel: loop4: detected capacity change from 0 to 201592 May 13 23:45:37.634560 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:45:37.634577 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:45:37.637798 kernel: loop5: detected capacity change from 0 to 103832 May 13 23:45:37.641054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:45:37.643569 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:45:37.644090 (sd-merge)[1192]: Merged extensions into '/usr'. May 13 23:45:37.647885 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:45:37.647907 systemd[1]: Reloading... May 13 23:45:37.723882 zram_generator::config[1224]: No configuration found. May 13 23:45:37.819601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:45:37.845321 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:45:37.873128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:45:37.873941 systemd[1]: Reloading finished in 225 ms. May 13 23:45:37.897866 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 13 23:45:37.899838 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:45:37.916454 systemd[1]: Starting ensure-sysext.service... May 13 23:45:37.918862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:45:37.936128 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... May 13 23:45:37.936155 systemd[1]: Reloading... May 13 23:45:37.939310 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:45:37.939559 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:45:37.940312 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:45:37.940571 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:45:37.940625 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:45:37.952418 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:45:37.952432 systemd-tmpfiles[1259]: Skipping /boot May 13 23:45:37.962357 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:45:37.962372 systemd-tmpfiles[1259]: Skipping /boot May 13 23:45:37.995967 zram_generator::config[1288]: No configuration found. May 13 23:45:38.098020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:45:38.150120 systemd[1]: Reloading finished in 213 ms. May 13 23:45:38.162786 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:45:38.178271 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:45:38.189277 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:45:38.202986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:45:38.205716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:45:38.212052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:45:38.220267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:45:38.225349 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:45:38.229792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:45:38.240234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:45:38.242776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:45:38.248066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:45:38.253819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:45:38.253987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 13 23:45:38.262802 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:45:38.264949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:45:38.265121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:45:38.266801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:45:38.266978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:45:38.269183 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:45:38.269359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:45:38.278378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:45:38.280852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:45:38.283368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:45:38.294478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:45:38.296436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:45:38.296587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:45:38.297542 systemd-udevd[1329]: Using default interface naming scheme 'v255'. May 13 23:45:38.300881 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:45:38.301491 augenrules[1360]: No rules May 13 23:45:38.304106 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:45:38.308600 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:45:38.308858 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:45:38.310579 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:45:38.312530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:45:38.312705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:45:38.314601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:45:38.314811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:45:38.316664 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:45:38.316848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:45:38.318543 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:45:38.323055 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:45:38.326239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:45:38.339851 systemd[1]: Finished ensure-sysext.service. May 13 23:45:38.347901 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:45:38.349408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:45:38.350785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:45:38.360489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 13 23:45:38.365291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:45:38.367854 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:45:38.369934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:45:38.369996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:45:38.373070 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:45:38.377896 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:45:38.379882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:45:38.380383 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:45:38.382344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:45:38.382522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:45:38.384899 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:45:38.386871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:45:38.389688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:45:38.390045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:45:38.392077 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:45:38.392246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:45:38.407044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:45:38.407112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:45:38.412963 augenrules[1393]: /sbin/augenrules: No change May 13 23:45:38.427058 augenrules[1429]: No rules May 13 23:45:38.453691 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:45:38.454943 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:45:38.457201 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 23:45:38.464061 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374) May 13 23:45:38.468961 systemd-resolved[1328]: Positive Trust Anchors: May 13 23:45:38.468979 systemd-resolved[1328]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:45:38.469011 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:45:38.481179 systemd-resolved[1328]: Defaulting to hostname 'linux'. May 13 23:45:38.484441 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:45:38.487693 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:45:38.488174 systemd-networkd[1402]: lo: Link UP May 13 23:45:38.488184 systemd-networkd[1402]: lo: Gained carrier May 13 23:45:38.490351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:45:38.492666 systemd-networkd[1402]: Enumeration completed May 13 23:45:38.492783 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:45:38.493926 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:45:38.495102 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:45:38.495106 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:45:38.495390 systemd[1]: Reached target network.target - Network. May 13 23:45:38.495844 systemd-networkd[1402]: eth0: Link UP May 13 23:45:38.495849 systemd-networkd[1402]: eth0: Gained carrier May 13 23:45:38.495862 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:45:38.498051 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:45:38.503035 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:45:38.513597 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:45:38.514219 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. May 13 23:45:38.515380 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:45:38.515440 systemd-timesyncd[1404]: Initial clock synchronization to Tue 2025-05-13 23:45:38.394913 UTC. May 13 23:45:38.521268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:45:38.524488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:45:38.531026 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:45:38.553145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:45:38.568867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:45:38.585813 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
May 13 23:45:38.589279 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:45:38.611116 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:45:38.626188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:45:38.643404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:45:38.644970 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:45:38.646088 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:45:38.647239 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:45:38.648487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:45:38.649956 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:45:38.651206 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:45:38.652529 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:45:38.653873 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:45:38.653933 systemd[1]: Reached target paths.target - Path Units. May 13 23:45:38.654983 systemd[1]: Reached target timers.target - Timer Units. May 13 23:45:38.657816 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:45:38.660296 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:45:38.663629 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:45:38.665248 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:45:38.666550 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:45:38.669970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:45:38.671570 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:45:38.674212 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:45:38.675969 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:45:38.677109 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:45:38.678113 systemd[1]: Reached target basic.target - Basic System. May 13 23:45:38.679083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:45:38.679120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:45:38.680272 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:45:38.682102 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:45:38.682918 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:45:38.685704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:45:38.690938 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 23:45:38.692032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:45:38.693271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:45:38.697946 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:45:38.701238 jq[1462]: false May 13 23:45:38.702091 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:45:38.705529 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:45:38.709597 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:45:38.713009 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:45:38.713548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:45:38.714273 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:45:38.716478 dbus-daemon[1461]: [system] SELinux support is enabled May 13 23:45:38.720104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:45:38.722064 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:45:38.728430 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:45:38.731163 jq[1478]: true May 13 23:45:38.733190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:45:38.733389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:45:38.733677 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:45:38.733870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:45:38.736698 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:45:38.736973 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:45:38.739289 extend-filesystems[1463]: Found loop3 May 13 23:45:38.741952 extend-filesystems[1463]: Found loop4 May 13 23:45:38.741952 extend-filesystems[1463]: Found loop5 May 13 23:45:38.741952 extend-filesystems[1463]: Found vda May 13 23:45:38.741952 extend-filesystems[1463]: Found vda1 May 13 23:45:38.741952 extend-filesystems[1463]: Found vda2 May 13 23:45:38.741952 extend-filesystems[1463]: Found vda3 May 13 23:45:38.741952 extend-filesystems[1463]: Found usr May 13 23:45:38.741952 extend-filesystems[1463]: Found vda4 May 13 23:45:38.741952 extend-filesystems[1463]: Found vda6 May 13 23:45:38.751027 extend-filesystems[1463]: Found vda7 May 13 23:45:38.751027 extend-filesystems[1463]: Found vda9 May 13 23:45:38.751027 extend-filesystems[1463]: Checking size of /dev/vda9 May 13 23:45:38.754066 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:45:38.755384 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:45:38.755437 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 13 23:45:38.760877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:45:38.760913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:45:38.764616 update_engine[1475]: I20250513 23:45:38.764232 1475 main.cc:92] Flatcar Update Engine starting May 13 23:45:38.766977 jq[1482]: true May 13 23:45:38.768489 update_engine[1475]: I20250513 23:45:38.768426 1475 update_check_scheduler.cc:74] Next update check in 4m30s May 13 23:45:38.770643 systemd[1]: Started update-engine.service - Update Engine. May 13 23:45:38.776943 tar[1481]: linux-arm64/LICENSE May 13 23:45:38.776943 tar[1481]: linux-arm64/helm May 13 23:45:38.775360 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:45:38.778112 extend-filesystems[1463]: Resized partition /dev/vda9 May 13 23:45:38.781811 extend-filesystems[1501]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:45:38.798760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1384) May 13 23:45:38.798848 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:45:38.795818 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:45:38.796860 systemd-logind[1474]: New seat seat0. May 13 23:45:38.799897 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:45:38.970590 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:45:38.991191 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:45:38.994769 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:45:38.996048 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:45:39.008267 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:45:39.013753 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:45:39.014041 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:45:39.017133 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:45:39.047677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:45:39.049786 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:45:39.049786 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:45:39.049786 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:45:39.058122 extend-filesystems[1463]: Resized filesystem in /dev/vda9 May 13 23:45:39.050355 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:45:39.050540 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:45:39.058384 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:45:39.063271 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:45:39.064836 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:45:39.070493 bash[1516]: Updated "/home/core/.ssh/authorized_keys" May 13 23:45:39.072289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:45:39.074670 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 23:45:39.201196 containerd[1483]: time="2025-05-13T23:45:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:45:39.206596 containerd[1483]: time="2025-05-13T23:45:39.205314614Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:45:39.215662 containerd[1483]: time="2025-05-13T23:45:39.215587747Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.801µs" May 13 23:45:39.215662 containerd[1483]: time="2025-05-13T23:45:39.215644204Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:45:39.215662 containerd[1483]: time="2025-05-13T23:45:39.215666386Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:45:39.215943 containerd[1483]: time="2025-05-13T23:45:39.215910969Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:45:39.215943 containerd[1483]: time="2025-05-13T23:45:39.215938745Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:45:39.215990 containerd[1483]: time="2025-05-13T23:45:39.215969475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:45:39.216048 containerd[1483]: time="2025-05-13T23:45:39.216021205Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:45:39.216048 containerd[1483]: time="2025-05-13T23:45:39.216036846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:45:39.216405 containerd[1483]: time="2025-05-13T23:45:39.216376221Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:45:39.216405 containerd[1483]: time="2025-05-13T23:45:39.216400451Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:45:39.216450 containerd[1483]: time="2025-05-13T23:45:39.216412034Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:45:39.216450 containerd[1483]: time="2025-05-13T23:45:39.216420111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:45:39.216503 containerd[1483]: time="2025-05-13T23:45:39.216490397Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:45:39.216714 containerd[1483]: time="2025-05-13T23:45:39.216689988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:45:39.216780 containerd[1483]: time="2025-05-13T23:45:39.216764371Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 13 23:45:39.216802 containerd[1483]: time="2025-05-13T23:45:39.216780367Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:45:39.216836 containerd[1483]: time="2025-05-13T23:45:39.216822444Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:45:39.217150 containerd[1483]: time="2025-05-13T23:45:39.217115251Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:45:39.217211 containerd[1483]: time="2025-05-13T23:45:39.217197081Z" level=info msg="metadata content store policy set" policy=shared May 13 23:45:39.221894 containerd[1483]: time="2025-05-13T23:45:39.221811347Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:45:39.221894 containerd[1483]: time="2025-05-13T23:45:39.221873281Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:45:39.221894 containerd[1483]: time="2025-05-13T23:45:39.221887700Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:45:39.221976 containerd[1483]: time="2025-05-13T23:45:39.221899598Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:45:39.222053 containerd[1483]: time="2025-05-13T23:45:39.222023466Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:45:39.222053 containerd[1483]: time="2025-05-13T23:45:39.222041038Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:45:39.222053 containerd[1483]: time="2025-05-13T23:45:39.222056206Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:45:39.222126 containerd[1483]: time="2025-05-13T23:45:39.222069444Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:45:39.222126 containerd[1483]: time="2025-05-13T23:45:39.222081736Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:45:39.222126 containerd[1483]: time="2025-05-13T23:45:39.222092964Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:45:39.222126 containerd[1483]: time="2025-05-13T23:45:39.222104311Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:45:39.222126 containerd[1483]: time="2025-05-13T23:45:39.222116091Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:45:39.222328 containerd[1483]: time="2025-05-13T23:45:39.222296377Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:45:39.222328 containerd[1483]: time="2025-05-13T23:45:39.222325177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:45:39.222385 containerd[1483]: time="2025-05-13T23:45:39.222338611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:45:39.222385 containerd[1483]: time="2025-05-13T23:45:39.222352361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 May 13 23:45:39.222385 containerd[1483]: time="2025-05-13T23:45:39.222366033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:45:39.222385 containerd[1483]: time="2025-05-13T23:45:39.222376788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:45:39.222450 containerd[1483]: time="2025-05-13T23:45:39.222388568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:45:39.222450 containerd[1483]: time="2025-05-13T23:45:39.222399324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:45:39.222450 containerd[1483]: time="2025-05-13T23:45:39.222416147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:45:39.222450 containerd[1483]: time="2025-05-13T23:45:39.222428360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:45:39.222450 containerd[1483]: time="2025-05-13T23:45:39.222438958Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:45:39.223027 containerd[1483]: time="2025-05-13T23:45:39.223002194Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:45:39.223056 containerd[1483]: time="2025-05-13T23:45:39.223026345Z" level=info msg="Start snapshots syncer" May 13 23:45:39.223075 containerd[1483]: time="2025-05-13T23:45:39.223062552Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:45:39.223418 containerd[1483]: time="2025-05-13T23:45:39.223382701Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:45:39.223524 containerd[1483]: time="2025-05-13T23:45:39.223441404Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:45:39.223571 containerd[1483]: time="2025-05-13T23:45:39.223530168Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:45:39.223700 containerd[1483]: time="2025-05-13T23:45:39.223679880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:45:39.223727 containerd[1483]: time="2025-05-13T23:45:39.223710099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:45:39.223757 containerd[1483]: time="2025-05-13T23:45:39.223733580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:45:39.223777 containerd[1483]: time="2025-05-13T23:45:39.223757928Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:45:39.223795 containerd[1483]: time="2025-05-13T23:45:39.223779912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:45:39.223813 containerd[1483]: time="2025-05-13T23:45:39.223797484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:45:39.223813 containerd[1483]: time="2025-05-13T23:45:39.223808515Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:45:39.223856 containerd[1483]: time="2025-05-13T23:45:39.223834400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:45:39.223856 containerd[1483]: 
time="2025-05-13T23:45:39.223847283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:45:39.223889 containerd[1483]: time="2025-05-13T23:45:39.223857014Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:45:39.223918 containerd[1483]: time="2025-05-13T23:45:39.223908823Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:45:39.223938 containerd[1483]: time="2025-05-13T23:45:39.223925646Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:45:39.223938 containerd[1483]: time="2025-05-13T23:45:39.223935141Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:45:39.223978 containerd[1483]: time="2025-05-13T23:45:39.223945148Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:45:39.223978 containerd[1483]: time="2025-05-13T23:45:39.223952791Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:45:39.223978 containerd[1483]: time="2025-05-13T23:45:39.223963035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:45:39.223978 containerd[1483]: time="2025-05-13T23:45:39.223973790Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:45:39.224211 containerd[1483]: time="2025-05-13T23:45:39.224195838Z" level=info msg="runtime interface created" May 13 23:45:39.224211 containerd[1483]: time="2025-05-13T23:45:39.224206791Z" level=info msg="created NRI interface" May 13 23:45:39.224257 containerd[1483]: time="2025-05-13T23:45:39.224216798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:45:39.224257 containerd[1483]: time="2025-05-13T23:45:39.224228893Z" level=info msg="Connect containerd service" May 13 23:45:39.224299 containerd[1483]: time="2025-05-13T23:45:39.224257181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:45:39.231868 containerd[1483]: time="2025-05-13T23:45:39.231817991Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:45:39.235751 tar[1481]: linux-arm64/README.md May 13 23:45:39.261013 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:45:39.363492 containerd[1483]: time="2025-05-13T23:45:39.363435935Z" level=info msg="Start subscribing containerd event" May 13 23:45:39.363492 containerd[1483]: time="2025-05-13T23:45:39.363504093Z" level=info msg="Start recovering state" May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363603298Z" level=info msg="Start event monitor" May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363625282Z" level=info msg="Start cni network conf syncer for default" May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363634265Z" level=info msg="Start streaming server" May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363644035Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363652388Z" level=info msg="runtime interface starting up..." May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363658219Z" level=info msg="starting plugins..." May 13 23:45:39.363690 containerd[1483]: time="2025-05-13T23:45:39.363672126Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:45:39.363998 containerd[1483]: time="2025-05-13T23:45:39.363932587Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:45:39.363998 containerd[1483]: time="2025-05-13T23:45:39.363981007Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:45:39.364579 containerd[1483]: time="2025-05-13T23:45:39.364074223Z" level=info msg="containerd successfully booted in 0.163273s" May 13 23:45:39.364209 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:45:39.947868 systemd-networkd[1402]: eth0: Gained IPv6LL May 13 23:45:39.953429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:45:39.955318 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:45:39.957979 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:45:39.960390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:45:39.968555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:45:39.991007 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:45:39.991208 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:45:39.992939 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:45:39.995063 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:45:40.548210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:45:40.550004 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:45:40.552513 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:45:40.554908 systemd[1]: Startup finished in 656ms (kernel) + 4.914s (initrd) + 3.962s (userspace) = 9.534s. 
May 13 23:45:41.039140 kubelet[1588]: E0513 23:45:41.038993 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:45:41.041280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:45:41.041427 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:45:41.041711 systemd[1]: kubelet.service: Consumed 845ms CPU time, 251.7M memory peak. May 13 23:45:44.215795 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:45:44.217127 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:53432.service - OpenSSH per-connection server daemon (10.0.0.1:53432). May 13 23:45:44.367070 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 53432 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:45:44.368239 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:44.380170 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:45:44.381168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:45:44.393155 systemd-logind[1474]: New session 1 of user core. May 13 23:45:44.406762 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:45:44.413162 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:45:44.428084 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:45:44.430722 systemd-logind[1474]: New session c1 of user core. May 13 23:45:44.552133 systemd[1606]: Queued start job for default target default.target. May 13 23:45:44.565835 systemd[1606]: Created slice app.slice - User Application Slice. May 13 23:45:44.565865 systemd[1606]: Reached target paths.target - Paths. May 13 23:45:44.565909 systemd[1606]: Reached target timers.target - Timers. May 13 23:45:44.567359 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:45:44.578391 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:45:44.578518 systemd[1606]: Reached target sockets.target - Sockets. May 13 23:45:44.578564 systemd[1606]: Reached target basic.target - Basic System. May 13 23:45:44.578595 systemd[1606]: Reached target default.target - Main User Target. May 13 23:45:44.578620 systemd[1606]: Startup finished in 141ms. May 13 23:45:44.578990 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:45:44.580724 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:45:44.640042 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:53436.service - OpenSSH per-connection server daemon (10.0.0.1:53436). May 13 23:45:44.698096 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 53436 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:45:44.700177 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:44.704835 systemd-logind[1474]: New session 2 of user core. May 13 23:45:44.717961 systemd[1]: Started session-2.scope - Session 2 of User core. 
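[Note] The kubelet exit above is the normal first-boot behaviour on this image: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails and is simply retried (the "Scheduled restart job" entries further down) until provisioning, typically kubeadm init/join, writes that file. As a sketch only — the values below are illustrative assumptions consistent with other entries in this log (systemd cgroup driver, containerd socket, static pod path, /etc/kubernetes/pki/ca.crt), not the actual file from this machine — a minimal KubeletConfiguration would look like:

    # Illustrative sketch of /var/lib/kubelet/config.yaml; kubeadm generates the real file.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt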
May 13 23:45:44.773505 sshd[1619]: Connection closed by 10.0.0.1 port 53436 May 13 23:45:44.774075 sshd-session[1617]: pam_unix(sshd:session): session closed for user core May 13 23:45:44.789051 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:53436.service: Deactivated successfully. May 13 23:45:44.790912 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:45:44.794539 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. May 13 23:45:44.796628 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:53450.service - OpenSSH per-connection server daemon (10.0.0.1:53450). May 13 23:45:44.797661 systemd-logind[1474]: Removed session 2. May 13 23:45:44.855816 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 53450 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:45:44.857645 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:44.862860 systemd-logind[1474]: New session 3 of user core. May 13 23:45:44.871948 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:45:44.921540 sshd[1627]: Connection closed by 10.0.0.1 port 53450 May 13 23:45:44.923338 sshd-session[1624]: pam_unix(sshd:session): session closed for user core May 13 23:45:44.933529 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:53450.service: Deactivated successfully. May 13 23:45:44.940287 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:45:44.943255 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. May 13 23:45:44.945369 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:53464.service - OpenSSH per-connection server daemon (10.0.0.1:53464). May 13 23:45:44.946385 systemd-logind[1474]: Removed session 3. May 13 23:45:45.007024 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:45:45.008455 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:45.013297 systemd-logind[1474]: New session 4 of user core. May 13 23:45:45.023959 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:45:45.078899 sshd[1635]: Connection closed by 10.0.0.1 port 53464 May 13 23:45:45.079710 sshd-session[1632]: pam_unix(sshd:session): session closed for user core May 13 23:45:45.100961 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:53464.service: Deactivated successfully. May 13 23:45:45.105581 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:45:45.107254 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. May 13 23:45:45.109131 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:53480.service - OpenSSH per-connection server daemon (10.0.0.1:53480). May 13 23:45:45.109978 systemd-logind[1474]: Removed session 4. May 13 23:45:45.174901 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 53480 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:45:45.176413 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:45.181583 systemd-logind[1474]: New session 5 of user core. May 13 23:45:45.190929 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 23:45:45.255733 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:45:45.256077 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:45:45.700455 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:45:45.716178 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:45:46.083478 dockerd[1665]: time="2025-05-13T23:45:46.083338705Z" level=info msg="Starting up" May 13 23:45:46.085042 dockerd[1665]: time="2025-05-13T23:45:46.085014901Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:45:46.279695 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1077481273-merged.mount: Deactivated successfully. May 13 23:45:46.301669 dockerd[1665]: time="2025-05-13T23:45:46.301621744Z" level=info msg="Loading containers: start." May 13 23:45:46.467811 kernel: Initializing XFRM netlink socket May 13 23:45:46.544730 systemd-networkd[1402]: docker0: Link UP May 13 23:45:46.613165 dockerd[1665]: time="2025-05-13T23:45:46.613110350Z" level=info msg="Loading containers: done." May 13 23:45:46.631145 dockerd[1665]: time="2025-05-13T23:45:46.631066574Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:45:46.631319 dockerd[1665]: time="2025-05-13T23:45:46.631175844Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:45:46.633154 dockerd[1665]: time="2025-05-13T23:45:46.633123666Z" level=info msg="Daemon has completed initialization" May 13 23:45:46.670116 dockerd[1665]: time="2025-05-13T23:45:46.670049173Z" level=info msg="API listen on /run/docker.sock" May 13 23:45:46.670279 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:45:47.302968 containerd[1483]: time="2025-05-13T23:45:47.302923566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:45:47.965687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839238552.mount: Deactivated successfully. 
May 13 23:45:48.908089 containerd[1483]: time="2025-05-13T23:45:48.908030882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:48.908487 containerd[1483]: time="2025-05-13T23:45:48.908324229Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 23:45:48.909377 containerd[1483]: time="2025-05-13T23:45:48.909344114Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:48.911796 containerd[1483]: time="2025-05-13T23:45:48.911757466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:48.912833 containerd[1483]: time="2025-05-13T23:45:48.912796982Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.60982904s" May 13 23:45:48.912879 containerd[1483]: time="2025-05-13T23:45:48.912839031Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 23:45:48.913488 containerd[1483]: time="2025-05-13T23:45:48.913448302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:45:50.163992 containerd[1483]: time="2025-05-13T23:45:50.163943710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:50.164885 containerd[1483]: time="2025-05-13T23:45:50.164838480Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 23:45:50.165572 containerd[1483]: time="2025-05-13T23:45:50.165524296Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:50.168670 containerd[1483]: time="2025-05-13T23:45:50.168638671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:50.170204 containerd[1483]: time="2025-05-13T23:45:50.170172619Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.256690981s" May 13 23:45:50.170253 containerd[1483]: time="2025-05-13T23:45:50.170218899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 23:45:50.170815 
containerd[1483]: time="2025-05-13T23:45:50.170648286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:45:51.291810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:45:51.293312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:45:51.412265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:45:51.416413 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:45:51.482881 kubelet[1940]: E0513 23:45:51.482806 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:45:51.485890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:45:51.486032 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:45:51.486486 systemd[1]: kubelet.service: Consumed 150ms CPU time, 101.5M memory peak. May 13 23:45:51.610732 containerd[1483]: time="2025-05-13T23:45:51.610539629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:51.611944 containerd[1483]: time="2025-05-13T23:45:51.611848609Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 23:45:51.613469 containerd[1483]: time="2025-05-13T23:45:51.613413333Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:51.615841 containerd[1483]: time="2025-05-13T23:45:51.615789469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:51.616876 containerd[1483]: time="2025-05-13T23:45:51.616839278Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.446159816s" May 13 23:45:51.616876 containerd[1483]: time="2025-05-13T23:45:51.616874969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 23:45:51.617580 containerd[1483]: time="2025-05-13T23:45:51.617307295Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:45:52.776693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757444883.mount: Deactivated successfully. 
May 13 23:45:53.172716 containerd[1483]: time="2025-05-13T23:45:53.172252872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:53.173622 containerd[1483]: time="2025-05-13T23:45:53.173475349Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 23:45:53.174690 containerd[1483]: time="2025-05-13T23:45:53.174657599Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:53.177546 containerd[1483]: time="2025-05-13T23:45:53.177510803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:53.178188 containerd[1483]: time="2025-05-13T23:45:53.178087382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.560742593s" May 13 23:45:53.178188 containerd[1483]: time="2025-05-13T23:45:53.178125413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 23:45:53.178846 containerd[1483]: time="2025-05-13T23:45:53.178716319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:45:53.741341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453651671.mount: Deactivated successfully. 
May 13 23:45:54.420936 containerd[1483]: time="2025-05-13T23:45:54.420869532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:54.421424 containerd[1483]: time="2025-05-13T23:45:54.421368716Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 23:45:54.422179 containerd[1483]: time="2025-05-13T23:45:54.422155595Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:54.424656 containerd[1483]: time="2025-05-13T23:45:54.424597268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:54.425811 containerd[1483]: time="2025-05-13T23:45:54.425779502Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.247030217s" May 13 23:45:54.426001 containerd[1483]: time="2025-05-13T23:45:54.425902093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 23:45:54.426523 containerd[1483]: time="2025-05-13T23:45:54.426496164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:45:54.876706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790173850.mount: Deactivated successfully. 
May 13 23:45:54.882379 containerd[1483]: time="2025-05-13T23:45:54.882327035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:45:54.883109 containerd[1483]: time="2025-05-13T23:45:54.883047050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 23:45:54.887213 containerd[1483]: time="2025-05-13T23:45:54.887136690Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:45:54.889415 containerd[1483]: time="2025-05-13T23:45:54.889332941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:45:54.890143 containerd[1483]: time="2025-05-13T23:45:54.889979905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 463.449251ms" May 13 23:45:54.890143 containerd[1483]: time="2025-05-13T23:45:54.890016391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 23:45:54.890616 containerd[1483]: time="2025-05-13T23:45:54.890590942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:45:55.386206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566526032.mount: Deactivated successfully. 
May 13 23:45:57.073490 containerd[1483]: time="2025-05-13T23:45:57.073438466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:57.075048 containerd[1483]: time="2025-05-13T23:45:57.074725233Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 23:45:57.075836 containerd[1483]: time="2025-05-13T23:45:57.075806201Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:57.079282 containerd[1483]: time="2025-05-13T23:45:57.079252347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:45:57.080391 containerd[1483]: time="2025-05-13T23:45:57.080360198Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.189737834s" May 13 23:45:57.080614 containerd[1483]: time="2025-05-13T23:45:57.080502165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 23:46:01.700630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:46:01.702224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:01.868067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:01.870943 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:46:01.910036 kubelet[2097]: E0513 23:46:01.909974 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:46:01.912661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:46:01.912835 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:46:01.913326 systemd[1]: kubelet.service: Consumed 140ms CPU time, 102.3M memory peak. May 13 23:46:03.910296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:03.910584 systemd[1]: kubelet.service: Consumed 140ms CPU time, 102.3M memory peak. May 13 23:46:03.918191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:03.946827 systemd[1]: Reload requested from client PID 2112 ('systemctl') (unit session-5.scope)... May 13 23:46:03.946850 systemd[1]: Reloading... May 13 23:46:04.039792 zram_generator::config[2162]: No configuration found. May 13 23:46:04.156603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:46:04.232136 systemd[1]: Reloading finished in 284 ms. 
May 13 23:46:04.294300 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:46:04.294370 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:46:04.294596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:04.294644 systemd[1]: kubelet.service: Consumed 99ms CPU time, 90.4M memory peak. May 13 23:46:04.298944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:04.452161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:04.478143 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:46:04.516159 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:46:04.516159 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:46:04.516159 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:46:04.516468 kubelet[2202]: I0513 23:46:04.516129 2202 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:46:05.229544 kubelet[2202]: I0513 23:46:05.229091 2202 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:46:05.229544 kubelet[2202]: I0513 23:46:05.229130 2202 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:46:05.230050 kubelet[2202]: I0513 23:46:05.229596 2202 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:46:05.323485 kubelet[2202]: E0513 23:46:05.323430 2202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:05.323706 kubelet[2202]: I0513 23:46:05.323497 2202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:46:05.338635 kubelet[2202]: I0513 23:46:05.338604 2202 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:46:05.342924 kubelet[2202]: I0513 23:46:05.342881 2202 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:46:05.344750 kubelet[2202]: I0513 23:46:05.344664 2202 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:46:05.344967 kubelet[2202]: I0513 23:46:05.344768 2202 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:46:05.345578 kubelet[2202]: I0513 23:46:05.345553 2202 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:46:05.345578 kubelet[2202]: I0513 23:46:05.345572 2202 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:46:05.346205 kubelet[2202]: I0513 23:46:05.346183 2202 state_mem.go:36] "Initialized new in-memory state store" May 13 23:46:05.359043 kubelet[2202]: I0513 23:46:05.359007 2202 kubelet.go:446] "Attempting to sync node with API server" May 13 23:46:05.359043 kubelet[2202]: I0513 23:46:05.359046 2202 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:46:05.359146 kubelet[2202]: I0513 23:46:05.359076 2202 kubelet.go:352] "Adding apiserver pod source" May 13 23:46:05.359146 kubelet[2202]: I0513 23:46:05.359091 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:46:05.362052 kubelet[2202]: W0513 23:46:05.361995 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:05.362093 kubelet[2202]: E0513 23:46:05.362065 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:05.363307 kubelet[2202]: W0513 23:46:05.363232 2202 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:05.363380 kubelet[2202]: E0513 23:46:05.363314 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:05.364322 kubelet[2202]: I0513 23:46:05.364297 2202 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:46:05.365557 kubelet[2202]: I0513 23:46:05.365447 2202 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:46:05.365910 kubelet[2202]: W0513 23:46:05.365896 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:46:05.369086 kubelet[2202]: I0513 23:46:05.368636 2202 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:46:05.369086 kubelet[2202]: I0513 23:46:05.368689 2202 server.go:1287] "Started kubelet" May 13 23:46:05.371644 kubelet[2202]: I0513 23:46:05.371533 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:46:05.372620 kubelet[2202]: I0513 23:46:05.372543 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:46:05.373007 kubelet[2202]: I0513 23:46:05.372985 2202 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.375482 2202 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.376443 2202 server.go:490] "Adding debug handlers to kubelet server" May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.378620 2202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:46:05.382200 kubelet[2202]: E0513 23:46:05.380457 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.380502 2202 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.380722 2202 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:46:05.382200 kubelet[2202]: I0513 23:46:05.380812 2202 reconciler.go:26] "Reconciler: start to sync state" May 13 23:46:05.382200 kubelet[2202]: E0513 23:46:05.380822 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" May 13 23:46:05.382200 kubelet[2202]: W0513 23:46:05.381764 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:05.382200 
kubelet[2202]: E0513 23:46:05.381819 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:05.383940 kubelet[2202]: E0513 23:46:05.383225 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3aebea7a75bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:46:05.368661435 +0000 UTC m=+0.887139270,LastTimestamp:2025-05-13 23:46:05.368661435 +0000 UTC m=+0.887139270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:46:05.384217 kubelet[2202]: I0513 23:46:05.384198 2202 factory.go:221] Registration of the systemd container factory successfully May 13 23:46:05.384403 kubelet[2202]: I0513 23:46:05.384385 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:46:05.385245 kubelet[2202]: E0513 23:46:05.385223 2202 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:46:05.385400 kubelet[2202]: I0513 23:46:05.385383 2202 factory.go:221] Registration of the containerd container factory successfully May 13 23:46:05.393209 kubelet[2202]: I0513 23:46:05.393160 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:46:05.394466 kubelet[2202]: I0513 23:46:05.394431 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:46:05.394466 kubelet[2202]: I0513 23:46:05.394462 2202 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:46:05.394558 kubelet[2202]: I0513 23:46:05.394483 2202 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:46:05.394558 kubelet[2202]: I0513 23:46:05.394491 2202 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:46:05.394558 kubelet[2202]: E0513 23:46:05.394534 2202 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:46:05.399056 kubelet[2202]: W0513 23:46:05.398965 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:05.399056 kubelet[2202]: E0513 23:46:05.399028 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:05.399368 kubelet[2202]: I0513 23:46:05.399274 2202 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:46:05.399368 kubelet[2202]: I0513 23:46:05.399360 2202 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:46:05.399433 kubelet[2202]: I0513 23:46:05.399381 2202 state_mem.go:36] "Initialized new in-memory state store" May 13 23:46:05.481613 kubelet[2202]: E0513 23:46:05.481482 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:05.491002 kubelet[2202]: I0513 23:46:05.490965 2202 policy_none.go:49] "None policy: Start" May 13 23:46:05.491002 kubelet[2202]: I0513 23:46:05.490997 2202 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:46:05.491002 kubelet[2202]: I0513 23:46:05.491011 2202 state_mem.go:35] "Initializing new in-memory state store" May 13 23:46:05.494655 kubelet[2202]: E0513 23:46:05.494620 2202 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:46:05.502232 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:46:05.518455 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:46:05.521655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:46:05.531048 kubelet[2202]: I0513 23:46:05.530638 2202 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:46:05.531048 kubelet[2202]: I0513 23:46:05.530903 2202 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:46:05.531048 kubelet[2202]: I0513 23:46:05.530918 2202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:46:05.531691 kubelet[2202]: I0513 23:46:05.531160 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:46:05.531857 kubelet[2202]: E0513 23:46:05.531822 2202 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:46:05.531915 kubelet[2202]: E0513 23:46:05.531865 2202 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:46:05.581721 kubelet[2202]: E0513 23:46:05.581680 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" May 13 23:46:05.633173 kubelet[2202]: I0513 23:46:05.632790 2202 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:46:05.633173 kubelet[2202]: E0513 23:46:05.633139 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" May 13 23:46:05.705842 systemd[1]: Created slice kubepods-burstable-pod3bbf1b00772deb49c76d045cf619e5c5.slice - libcontainer container kubepods-burstable-pod3bbf1b00772deb49c76d045cf619e5c5.slice. May 13 23:46:05.737591 kubelet[2202]: E0513 23:46:05.737186 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:05.741144 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 23:46:05.760877 kubelet[2202]: E0513 23:46:05.760846 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:05.764011 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 23:46:05.765733 kubelet[2202]: E0513 23:46:05.765697 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:05.783059 kubelet[2202]: I0513 23:46:05.783020 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:05.783156 kubelet[2202]: I0513 23:46:05.783061 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:05.783156 kubelet[2202]: I0513 23:46:05.783085 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:46:05.783156 kubelet[2202]: I0513 23:46:05.783100 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:05.783156 kubelet[2202]: I0513 23:46:05.783116 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:05.783156 kubelet[2202]: I0513 23:46:05.783133 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:05.783255 kubelet[2202]: I0513 23:46:05.783151 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:05.783255 kubelet[2202]: I0513 23:46:05.783172 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:05.783255 kubelet[2202]: I0513 23:46:05.783189 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:05.835290 kubelet[2202]: I0513 23:46:05.835246 2202 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:46:05.835605 kubelet[2202]: E0513 23:46:05.835583 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" May 13 23:46:05.982907 kubelet[2202]: E0513 23:46:05.982862 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" May 13 23:46:06.038960 containerd[1483]: time="2025-05-13T23:46:06.038858873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3bbf1b00772deb49c76d045cf619e5c5,Namespace:kube-system,Attempt:0,}" May 13 23:46:06.059558 containerd[1483]: time="2025-05-13T23:46:06.059426018Z" level=info msg="connecting to shim c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a" address="unix:///run/containerd/s/8186c1388d80843bedabed3b73d37408b3854b05b6126f8694ce3da9429278fd" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:06.062858 containerd[1483]: time="2025-05-13T23:46:06.062579368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 23:46:06.067456 containerd[1483]: time="2025-05-13T23:46:06.067418508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 23:46:06.088261 systemd[1]: Started cri-containerd-c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a.scope - libcontainer container c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a. May 13 23:46:06.097956 containerd[1483]: time="2025-05-13T23:46:06.097430789Z" level=info msg="connecting to shim e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770" address="unix:///run/containerd/s/34c9440be2253b25804080875e239aa2ea015b2477595b5f8a57a49eccaf4d15" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:06.115771 containerd[1483]: time="2025-05-13T23:46:06.115694277Z" level=info msg="connecting to shim beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe" address="unix:///run/containerd/s/313d1a55dc3d870e15e49d40b7941fda109e7ecc5f3fd7c5d5c5701168a20c14" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:06.128858 containerd[1483]: time="2025-05-13T23:46:06.128805233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3bbf1b00772deb49c76d045cf619e5c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a\"" May 13 23:46:06.129947 systemd[1]: Started cri-containerd-e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770.scope - libcontainer container e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770. 
May 13 23:46:06.132135 containerd[1483]: time="2025-05-13T23:46:06.132091249Z" level=info msg="CreateContainer within sandbox \"c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:46:06.140288 containerd[1483]: time="2025-05-13T23:46:06.140247791Z" level=info msg="Container 8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:06.145095 systemd[1]: Started cri-containerd-beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe.scope - libcontainer container beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe. May 13 23:46:06.148021 containerd[1483]: time="2025-05-13T23:46:06.147955478Z" level=info msg="CreateContainer within sandbox \"c7ecea53a42cd1dc7dc18707277d41d5eb6ecfe0a374b6449a0d3e375cccb54a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698\"" May 13 23:46:06.148790 containerd[1483]: time="2025-05-13T23:46:06.148758949Z" level=info msg="StartContainer for \"8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698\"" May 13 23:46:06.150360 containerd[1483]: time="2025-05-13T23:46:06.150203358Z" level=info msg="connecting to shim 8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698" address="unix:///run/containerd/s/8186c1388d80843bedabed3b73d37408b3854b05b6126f8694ce3da9429278fd" protocol=ttrpc version=3 May 13 23:46:06.176822 kubelet[2202]: W0513 23:46:06.176739 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:06.176951 kubelet[2202]: E0513 23:46:06.176834 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:06.177056 systemd[1]: Started cri-containerd-8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698.scope - libcontainer container 8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698. 
May 13 23:46:06.184146 containerd[1483]: time="2025-05-13T23:46:06.184042954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770\"" May 13 23:46:06.186733 containerd[1483]: time="2025-05-13T23:46:06.186679675Z" level=info msg="CreateContainer within sandbox \"e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:46:06.194877 containerd[1483]: time="2025-05-13T23:46:06.194835538Z" level=info msg="Container ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:06.196224 containerd[1483]: time="2025-05-13T23:46:06.196177949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe\"" May 13 23:46:06.200082 containerd[1483]: time="2025-05-13T23:46:06.200040489Z" level=info msg="CreateContainer within sandbox \"beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:46:06.203294 containerd[1483]: time="2025-05-13T23:46:06.203238860Z" level=info msg="CreateContainer within sandbox \"e79e08978c8e41d0464b27c4c8f12baf279c358e6a143eb7fe898f30ee76e770\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a\"" May 13 23:46:06.203904 containerd[1483]: time="2025-05-13T23:46:06.203853489Z" level=info msg="StartContainer for \"ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a\"" May 13 23:46:06.205332 containerd[1483]: time="2025-05-13T23:46:06.205255955Z" level=info msg="connecting to shim ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a" address="unix:///run/containerd/s/34c9440be2253b25804080875e239aa2ea015b2477595b5f8a57a49eccaf4d15" protocol=ttrpc version=3 May 13 23:46:06.207698 containerd[1483]: time="2025-05-13T23:46:06.207599716Z" level=info msg="Container 5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:06.218520 containerd[1483]: time="2025-05-13T23:46:06.218470788Z" level=info msg="CreateContainer within sandbox \"beeab33d26937e13b76f457a4cff9cf812768b904c9550c575821d2e5dfc9bfe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0\"" May 13 23:46:06.219156 containerd[1483]: time="2025-05-13T23:46:06.219128399Z" level=info msg="StartContainer for \"5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0\"" May 13 23:46:06.220589 containerd[1483]: time="2025-05-13T23:46:06.220317513Z" level=info msg="connecting to shim 5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0" address="unix:///run/containerd/s/313d1a55dc3d870e15e49d40b7941fda109e7ecc5f3fd7c5d5c5701168a20c14" protocol=ttrpc version=3 May 13 23:46:06.235870 systemd[1]: Started cri-containerd-ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a.scope - libcontainer container ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a. 
May 13 23:46:06.240052 containerd[1483]: time="2025-05-13T23:46:06.240010936Z" level=info msg="StartContainer for \"8c626e82874aabbe3b6ce7c833cdf12b555684a8cd03b3d04aa8b6e343eef698\" returns successfully" May 13 23:46:06.241056 kubelet[2202]: I0513 23:46:06.241029 2202 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:46:06.241375 kubelet[2202]: E0513 23:46:06.241350 2202 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" May 13 23:46:06.248363 systemd[1]: Started cri-containerd-5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0.scope - libcontainer container 5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0. May 13 23:46:06.293933 containerd[1483]: time="2025-05-13T23:46:06.293581738Z" level=info msg="StartContainer for \"ec1438995586b3dd03205e75e36adc7b99cac66fdb6d71fcc9960fd303cb4c4a\" returns successfully" May 13 23:46:06.308058 containerd[1483]: time="2025-05-13T23:46:06.307957977Z" level=info msg="StartContainer for \"5c62067a3a222b435f4ae57a97cca7953d4eaf3faa63379adafb83a76114fbc0\" returns successfully" May 13 23:46:06.320452 kubelet[2202]: W0513 23:46:06.320331 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:06.320452 kubelet[2202]: E0513 23:46:06.320415 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:06.393316 kubelet[2202]: W0513 23:46:06.393247 2202 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused May 13 23:46:06.393316 kubelet[2202]: E0513 23:46:06.393315 2202 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" May 13 23:46:06.405056 kubelet[2202]: E0513 23:46:06.404894 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:06.408771 kubelet[2202]: E0513 23:46:06.408649 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:06.411835 kubelet[2202]: E0513 23:46:06.411801 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:07.043513 kubelet[2202]: I0513 23:46:07.043471 2202 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:46:07.414183 kubelet[2202]: E0513 23:46:07.414085 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:07.414848 kubelet[2202]: E0513 23:46:07.414828 2202 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 23:46:08.030868 kubelet[2202]: E0513 23:46:08.030833 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:46:08.118089 kubelet[2202]: I0513 23:46:08.117953 2202 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:46:08.118089 kubelet[2202]: E0513 23:46:08.117996 2202 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 23:46:08.133390 kubelet[2202]: E0513 23:46:08.133335 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:08.233652 kubelet[2202]: E0513 23:46:08.233589 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:08.334457 kubelet[2202]: E0513 23:46:08.334333 2202 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:08.480975 kubelet[2202]: I0513 23:46:08.480927 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:46:08.489312 kubelet[2202]: E0513 23:46:08.489265 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 23:46:08.489312 kubelet[2202]: I0513 23:46:08.489301 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:46:08.491391 kubelet[2202]: E0513 23:46:08.491361 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 23:46:08.491391 kubelet[2202]: I0513 23:46:08.491391 2202 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:46:08.492977 kubelet[2202]: E0513 23:46:08.492955 2202 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 23:46:09.365113 kubelet[2202]: I0513 23:46:09.365074 2202 apiserver.go:52] "Watching apiserver" May 13 23:46:09.381689 kubelet[2202]: I0513 23:46:09.381636 2202 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:46:10.434993 systemd[1]: Reload requested from client PID 2476 ('systemctl') (unit session-5.scope)... May 13 23:46:10.435015 systemd[1]: Reloading... May 13 23:46:10.508790 zram_generator::config[2526]: No configuration found. May 13 23:46:10.590203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:46:10.676913 systemd[1]: Reloading finished in 241 ms. 
May 13 23:46:10.696044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:10.711700 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:46:10.712051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:10.712134 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 128.1M memory peak. May 13 23:46:10.714246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:46:10.840102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:46:10.844821 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:46:10.887779 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:46:10.887779 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:46:10.887779 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:46:10.887779 kubelet[2562]: I0513 23:46:10.886509 2562 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:46:10.895199 kubelet[2562]: I0513 23:46:10.895162 2562 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:46:10.895199 kubelet[2562]: I0513 23:46:10.895190 2562 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:46:10.895812 kubelet[2562]: I0513 23:46:10.895784 2562 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:46:10.897236 kubelet[2562]: I0513 23:46:10.897209 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:46:10.899442 kubelet[2562]: I0513 23:46:10.899413 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:46:10.903014 kubelet[2562]: I0513 23:46:10.902991 2562 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:46:10.905659 kubelet[2562]: I0513 23:46:10.905622 2562 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:46:10.905890 kubelet[2562]: I0513 23:46:10.905861 2562 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:46:10.906048 kubelet[2562]: I0513 23:46:10.905893 2562 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:46:10.906123 kubelet[2562]: I0513 23:46:10.906059 2562 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:46:10.906123 kubelet[2562]: I0513 23:46:10.906068 2562 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:46:10.906123 kubelet[2562]: I0513 23:46:10.906109 2562 state_mem.go:36] "Initialized new in-memory state store" May 13 23:46:10.906249 kubelet[2562]: I0513 23:46:10.906237 2562 kubelet.go:446] "Attempting to sync node with API server" May 13 23:46:10.906278 kubelet[2562]: I0513 23:46:10.906251 2562 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:46:10.906278 kubelet[2562]: I0513 23:46:10.906273 2562 kubelet.go:352] "Adding apiserver pod source" May 13 23:46:10.906409 kubelet[2562]: I0513 23:46:10.906282 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:46:10.906846 kubelet[2562]: I0513 23:46:10.906825 2562 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:46:10.909770 kubelet[2562]: I0513 23:46:10.908104 2562 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:46:10.909770 kubelet[2562]: I0513 23:46:10.908578 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:46:10.909770 kubelet[2562]: I0513 23:46:10.908610 2562 server.go:1287] "Started kubelet" May 13 23:46:10.911204 kubelet[2562]: I0513 23:46:10.911164 2562 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:46:10.911882 kubelet[2562]: I0513 23:46:10.911853 2562 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" May 13 23:46:10.912300 kubelet[2562]: I0513 23:46:10.912270 2562 server.go:490] "Adding debug handlers to kubelet server" May 13 23:46:10.912759 kubelet[2562]: I0513 23:46:10.912682 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:46:10.913109 kubelet[2562]: I0513 23:46:10.913075 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:46:10.913244 kubelet[2562]: I0513 23:46:10.913217 2562 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:46:10.914237 kubelet[2562]: I0513 23:46:10.914184 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:46:10.914302 kubelet[2562]: E0513 23:46:10.914284 2562 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:46:10.915123 kubelet[2562]: I0513 23:46:10.914833 2562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:46:10.915123 kubelet[2562]: I0513 23:46:10.914964 2562 reconciler.go:26] "Reconciler: start to sync state" May 13 23:46:10.917128 kubelet[2562]: I0513 23:46:10.916890 2562 factory.go:221] Registration of the systemd container factory successfully May 13 23:46:10.917128 kubelet[2562]: I0513 23:46:10.917018 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:46:10.923781 kubelet[2562]: E0513 23:46:10.923311 2562 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:46:10.924403 kubelet[2562]: I0513 23:46:10.924382 2562 factory.go:221] Registration of the containerd container factory successfully May 13 23:46:10.941885 kubelet[2562]: I0513 23:46:10.941817 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:46:10.943199 kubelet[2562]: I0513 23:46:10.943174 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:46:10.943199 kubelet[2562]: I0513 23:46:10.943192 2562 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:46:10.943293 kubelet[2562]: I0513 23:46:10.943208 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:46:10.943293 kubelet[2562]: I0513 23:46:10.943215 2562 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:46:10.943293 kubelet[2562]: E0513 23:46:10.943249 2562 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:46:10.970038 kubelet[2562]: I0513 23:46:10.970006 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:46:10.970038 kubelet[2562]: I0513 23:46:10.970026 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:46:10.970038 kubelet[2562]: I0513 23:46:10.970046 2562 state_mem.go:36] "Initialized new in-memory state store" May 13 23:46:10.970211 kubelet[2562]: I0513 23:46:10.970183 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:46:10.970211 kubelet[2562]: I0513 23:46:10.970194 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:46:10.970211 kubelet[2562]: I0513 23:46:10.970211 2562 policy_none.go:49] "None policy: Start" May 13 23:46:10.970269 kubelet[2562]: I0513 23:46:10.970219 2562 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:46:10.970269 kubelet[2562]: I0513 23:46:10.970228 2562 state_mem.go:35] "Initializing new in-memory state store" May 13 23:46:10.970339 kubelet[2562]: I0513 23:46:10.970323 2562 state_mem.go:75] "Updated machine memory state" May 13 23:46:10.973951 kubelet[2562]: I0513 23:46:10.973874 2562 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:46:10.974079 kubelet[2562]: I0513 23:46:10.974038 2562 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:46:10.974210 kubelet[2562]: I0513 23:46:10.974083 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:46:10.974822 kubelet[2562]: I0513 23:46:10.974322 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:46:10.975601 kubelet[2562]: E0513 23:46:10.975586 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:46:11.044269 kubelet[2562]: I0513 23:46:11.044230 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.044411 kubelet[2562]: I0513 23:46:11.044358 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:46:11.044694 kubelet[2562]: I0513 23:46:11.044228 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.077400 kubelet[2562]: I0513 23:46:11.077365 2562 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:46:11.087902 kubelet[2562]: I0513 23:46:11.087814 2562 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 23:46:11.088031 kubelet[2562]: I0513 23:46:11.087952 2562 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:46:11.216796 kubelet[2562]: I0513 23:46:11.216741 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.217170 kubelet[2562]: I0513 23:46:11.216972 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.217170 kubelet[2562]: I0513 23:46:11.217000 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.217170 kubelet[2562]: I0513 23:46:11.217017 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:46:11.217170 kubelet[2562]: I0513 23:46:11.217035 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.217170 kubelet[2562]: I0513 23:46:11.217051 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bbf1b00772deb49c76d045cf619e5c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3bbf1b00772deb49c76d045cf619e5c5\") " pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.217334 kubelet[2562]: I0513 23:46:11.217068 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.217334 kubelet[2562]: I0513 23:46:11.217084 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.217334 kubelet[2562]: I0513 23:46:11.217100 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.907758 kubelet[2562]: I0513 23:46:11.907373 2562 apiserver.go:52] "Watching apiserver" May 13 23:46:11.915772 kubelet[2562]: I0513 23:46:11.915718 2562 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:46:11.961352 kubelet[2562]: I0513 23:46:11.959166 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:46:11.961352 kubelet[2562]: I0513 23:46:11.959487 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.961352 kubelet[2562]: I0513 23:46:11.959568 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.970113 kubelet[2562]: E0513 23:46:11.970071 2562 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:46:11.971431 kubelet[2562]: E0513 23:46:11.970048 2562 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:46:11.971431 kubelet[2562]: E0513 23:46:11.971144 2562 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 23:46:11.986024 kubelet[2562]: I0513 23:46:11.985861 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.985823798 podStartE2EDuration="985.823798ms" podCreationTimestamp="2025-05-13 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:11.985621681 +0000 UTC m=+1.137610899" watchObservedRunningTime="2025-05-13 23:46:11.985823798 +0000 UTC m=+1.137813016" May 13 23:46:12.000881 kubelet[2562]: I0513 23:46:12.000561 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.000543607 podStartE2EDuration="1.000543607s" podCreationTimestamp="2025-05-13 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:11.992670531 +0000 UTC m=+1.144659709" watchObservedRunningTime="2025-05-13 
23:46:12.000543607 +0000 UTC m=+1.152532825" May 13 23:46:12.009567 kubelet[2562]: I0513 23:46:12.009515 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.009497954 podStartE2EDuration="1.009497954s" podCreationTimestamp="2025-05-13 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:12.001028839 +0000 UTC m=+1.153018097" watchObservedRunningTime="2025-05-13 23:46:12.009497954 +0000 UTC m=+1.161487172" May 13 23:46:12.202583 sudo[1644]: pam_unix(sudo:session): session closed for user root May 13 23:46:12.205794 sshd[1643]: Connection closed by 10.0.0.1 port 53480 May 13 23:46:12.206289 sshd-session[1640]: pam_unix(sshd:session): session closed for user core May 13 23:46:12.209594 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:53480.service: Deactivated successfully. May 13 23:46:12.211521 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:46:12.211710 systemd[1]: session-5.scope: Consumed 8.336s CPU time, 227.5M memory peak. May 13 23:46:12.212631 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. May 13 23:46:12.213540 systemd-logind[1474]: Removed session 5. May 13 23:46:15.901875 kubelet[2562]: I0513 23:46:15.901336 2562 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:46:15.901875 kubelet[2562]: I0513 23:46:15.901809 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:46:15.902292 containerd[1483]: time="2025-05-13T23:46:15.901622429Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:46:16.703475 systemd[1]: Created slice kubepods-besteffort-pod456aa211_ecce_4169_b6ec_f34d1cfec2a0.slice - libcontainer container kubepods-besteffort-pod456aa211_ecce_4169_b6ec_f34d1cfec2a0.slice. May 13 23:46:16.717918 systemd[1]: Created slice kubepods-burstable-pod2fb8ca46_a13a_4198_9c6b_6fca1942536c.slice - libcontainer container kubepods-burstable-pod2fb8ca46_a13a_4198_9c6b_6fca1942536c.slice. 
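A note on the pod CIDR recorded just above: the 192.168.0.0/24 range the kubelet pushes to the runtime at 23:46:15 is this node's spec.podCIDR (typically allocated by kube-controller-manager when node CIDR allocation is enabled). The kubelet hands it to containerd through the CRI runtime-config update, and containerd's "No cni config template is specified" message simply means it will wait for another component, here the kube-flannel pod created next, to drop a CNI configuration onto the node. Assuming an admin kubeconfig is available, the allocated range can be checked with:

    kubectl get node localhost -o jsonpath='{.spec.podCIDR}'
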
May 13 23:46:16.753711 kubelet[2562]: I0513 23:46:16.753658 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/456aa211-ecce-4169-b6ec-f34d1cfec2a0-lib-modules\") pod \"kube-proxy-qwmtj\" (UID: \"456aa211-ecce-4169-b6ec-f34d1cfec2a0\") " pod="kube-system/kube-proxy-qwmtj" May 13 23:46:16.753711 kubelet[2562]: I0513 23:46:16.753704 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/2fb8ca46-a13a-4198-9c6b-6fca1942536c-cni\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:16.753711 kubelet[2562]: I0513 23:46:16.753723 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2fb8ca46-a13a-4198-9c6b-6fca1942536c-run\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:16.753914 kubelet[2562]: I0513 23:46:16.753738 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/2fb8ca46-a13a-4198-9c6b-6fca1942536c-flannel-cfg\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:16.753914 kubelet[2562]: I0513 23:46:16.753775 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rksh\" (UniqueName: \"kubernetes.io/projected/2fb8ca46-a13a-4198-9c6b-6fca1942536c-kube-api-access-7rksh\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:16.753914 kubelet[2562]: I0513 23:46:16.753792 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/456aa211-ecce-4169-b6ec-f34d1cfec2a0-kube-proxy\") pod \"kube-proxy-qwmtj\" (UID: \"456aa211-ecce-4169-b6ec-f34d1cfec2a0\") " pod="kube-system/kube-proxy-qwmtj" May 13 23:46:16.753914 kubelet[2562]: I0513 23:46:16.753807 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-887h9\" (UniqueName: \"kubernetes.io/projected/456aa211-ecce-4169-b6ec-f34d1cfec2a0-kube-api-access-887h9\") pod \"kube-proxy-qwmtj\" (UID: \"456aa211-ecce-4169-b6ec-f34d1cfec2a0\") " pod="kube-system/kube-proxy-qwmtj" May 13 23:46:16.753914 kubelet[2562]: I0513 23:46:16.753823 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/2fb8ca46-a13a-4198-9c6b-6fca1942536c-cni-plugin\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:16.754017 kubelet[2562]: I0513 23:46:16.753838 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/456aa211-ecce-4169-b6ec-f34d1cfec2a0-xtables-lock\") pod \"kube-proxy-qwmtj\" (UID: \"456aa211-ecce-4169-b6ec-f34d1cfec2a0\") " pod="kube-system/kube-proxy-qwmtj" May 13 23:46:16.754017 kubelet[2562]: I0513 23:46:16.753853 2562 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fb8ca46-a13a-4198-9c6b-6fca1942536c-xtables-lock\") pod \"kube-flannel-ds-k9v68\" (UID: \"2fb8ca46-a13a-4198-9c6b-6fca1942536c\") " pod="kube-flannel/kube-flannel-ds-k9v68" May 13 23:46:17.016701 containerd[1483]: time="2025-05-13T23:46:17.016335780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwmtj,Uid:456aa211-ecce-4169-b6ec-f34d1cfec2a0,Namespace:kube-system,Attempt:0,}" May 13 23:46:17.021863 containerd[1483]: time="2025-05-13T23:46:17.021589801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-k9v68,Uid:2fb8ca46-a13a-4198-9c6b-6fca1942536c,Namespace:kube-flannel,Attempt:0,}" May 13 23:46:17.038481 containerd[1483]: time="2025-05-13T23:46:17.038439131Z" level=info msg="connecting to shim edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8" address="unix:///run/containerd/s/77488519b2fcfe32629a224d7f6b36685b2cbd3bb311af524137ce02273bf8f5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:17.044018 containerd[1483]: time="2025-05-13T23:46:17.043964149Z" level=info msg="connecting to shim b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0" address="unix:///run/containerd/s/5b4933de32af26c0111fe097d8576d3fdd3aff72c264ef11ae72d364eb7a19e3" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:17.062936 systemd[1]: Started cri-containerd-edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8.scope - libcontainer container edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8. May 13 23:46:17.065998 systemd[1]: Started cri-containerd-b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0.scope - libcontainer container b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0. 
May 13 23:46:17.091057 containerd[1483]: time="2025-05-13T23:46:17.091010101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwmtj,Uid:456aa211-ecce-4169-b6ec-f34d1cfec2a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8\"" May 13 23:46:17.095171 containerd[1483]: time="2025-05-13T23:46:17.095128574Z" level=info msg="CreateContainer within sandbox \"edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:46:17.105106 containerd[1483]: time="2025-05-13T23:46:17.104999463Z" level=info msg="Container b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:17.105599 containerd[1483]: time="2025-05-13T23:46:17.105557937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-k9v68,Uid:2fb8ca46-a13a-4198-9c6b-6fca1942536c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\"" May 13 23:46:17.107176 containerd[1483]: time="2025-05-13T23:46:17.107143639Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 23:46:17.111950 containerd[1483]: time="2025-05-13T23:46:17.111901066Z" level=info msg="CreateContainer within sandbox \"edd48f26c5deec49d6e4b98456d78c5517af1d3ed3813937c5b3d742905954d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76\"" May 13 23:46:17.113189 containerd[1483]: time="2025-05-13T23:46:17.112666417Z" level=info msg="StartContainer for \"b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76\"" May 13 23:46:17.114126 containerd[1483]: time="2025-05-13T23:46:17.114079321Z" level=info msg="connecting to shim b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76" address="unix:///run/containerd/s/77488519b2fcfe32629a224d7f6b36685b2cbd3bb311af524137ce02273bf8f5" protocol=ttrpc version=3 May 13 23:46:17.132925 systemd[1]: Started cri-containerd-b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76.scope - libcontainer container b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76. May 13 23:46:17.164210 containerd[1483]: time="2025-05-13T23:46:17.163605245Z" level=info msg="StartContainer for \"b27abece79185e4b50dd4843b331283da2f81bea18096fdf60aae3b78af37b76\" returns successfully" May 13 23:46:17.982603 kubelet[2562]: I0513 23:46:17.982512 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qwmtj" podStartSLOduration=1.9824924830000001 podStartE2EDuration="1.982492483s" podCreationTimestamp="2025-05-13 23:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:17.982440404 +0000 UTC m=+7.134429622" watchObservedRunningTime="2025-05-13 23:46:17.982492483 +0000 UTC m=+7.134481701" May 13 23:46:18.231949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536940019.mount: Deactivated successfully. 
May 13 23:46:18.257732 containerd[1483]: time="2025-05-13T23:46:18.257619023Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:18.258626 containerd[1483]: time="2025-05-13T23:46:18.258505214Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 13 23:46:18.259389 containerd[1483]: time="2025-05-13T23:46:18.259344725Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:18.264471 containerd[1483]: time="2025-05-13T23:46:18.261258744Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:18.264471 containerd[1483]: time="2025-05-13T23:46:18.262148015Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.154968896s" May 13 23:46:18.264471 containerd[1483]: time="2025-05-13T23:46:18.262216094Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 13 23:46:18.264471 containerd[1483]: time="2025-05-13T23:46:18.264414991Z" level=info msg="CreateContainer within sandbox \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 23:46:18.274906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063407315.mount: Deactivated successfully. May 13 23:46:18.275673 containerd[1483]: time="2025-05-13T23:46:18.275619472Z" level=info msg="Container b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:18.281466 containerd[1483]: time="2025-05-13T23:46:18.281432130Z" level=info msg="CreateContainer within sandbox \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\"" May 13 23:46:18.282078 containerd[1483]: time="2025-05-13T23:46:18.282032283Z" level=info msg="StartContainer for \"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\"" May 13 23:46:18.282995 containerd[1483]: time="2025-05-13T23:46:18.282942394Z" level=info msg="connecting to shim b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b" address="unix:///run/containerd/s/5b4933de32af26c0111fe097d8576d3fdd3aff72c264ef11ae72d364eb7a19e3" protocol=ttrpc version=3 May 13 23:46:18.302935 systemd[1]: Started cri-containerd-b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b.scope - libcontainer container b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b. 
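For context on the container started here: in the stock kube-flannel DaemonSet, install-cni-plugin is an init container built from the flannel-cni-plugin image pulled above, and its only job is to copy the flannel CNI binary into the host's CNI plugin directory (normally /opt/cni/bin) before the main kube-flannel container starts; a companion install-cni init container then drops the CNI config into /etc/cni/net.d. Sketched roughly from the upstream manifest (image tag taken from the pull above; field names and paths may differ by flannel version):

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin
        mountPath: /opt/cni/bin

This also explains the short lifetime recorded next: the container exits as soon as the copy completes, which is why its scope is deactivated moments after StartContainer returns.
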
May 13 23:46:18.330223 containerd[1483]: time="2025-05-13T23:46:18.330136731Z" level=info msg="StartContainer for \"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\" returns successfully" May 13 23:46:18.336918 systemd[1]: cri-containerd-b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b.scope: Deactivated successfully. May 13 23:46:18.339612 containerd[1483]: time="2025-05-13T23:46:18.339574711Z" level=info msg="received exit event container_id:\"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\" id:\"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\" pid:2900 exited_at:{seconds:1747179978 nanos:339184475}" May 13 23:46:18.339778 containerd[1483]: time="2025-05-13T23:46:18.339659710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\" id:\"b71c0774d4b96abe5fce6aecd4623d35074867fd9fcc2fb86421e43eaf46e17b\" pid:2900 exited_at:{seconds:1747179978 nanos:339184475}" May 13 23:46:18.982443 containerd[1483]: time="2025-05-13T23:46:18.982402429Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 23:46:20.187795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539324394.mount: Deactivated successfully. May 13 23:46:20.709800 containerd[1483]: time="2025-05-13T23:46:20.709562245Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:20.710739 containerd[1483]: time="2025-05-13T23:46:20.710678435Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 13 23:46:20.712623 containerd[1483]: time="2025-05-13T23:46:20.712542417Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:20.715897 containerd[1483]: time="2025-05-13T23:46:20.715854025Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:46:20.717142 containerd[1483]: time="2025-05-13T23:46:20.716901495Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.734031711s" May 13 23:46:20.717142 containerd[1483]: time="2025-05-13T23:46:20.716939615Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 13 23:46:20.719719 containerd[1483]: time="2025-05-13T23:46:20.719026235Z" level=info msg="CreateContainer within sandbox \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:46:20.724533 containerd[1483]: time="2025-05-13T23:46:20.724484462Z" level=info msg="Container 651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:20.731495 containerd[1483]: time="2025-05-13T23:46:20.731453836Z" level=info msg="CreateContainer within sandbox 
\"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\"" May 13 23:46:20.732234 containerd[1483]: time="2025-05-13T23:46:20.732204389Z" level=info msg="StartContainer for \"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\"" May 13 23:46:20.733099 containerd[1483]: time="2025-05-13T23:46:20.733067940Z" level=info msg="connecting to shim 651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91" address="unix:///run/containerd/s/5b4933de32af26c0111fe097d8576d3fdd3aff72c264ef11ae72d364eb7a19e3" protocol=ttrpc version=3 May 13 23:46:20.755961 systemd[1]: Started cri-containerd-651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91.scope - libcontainer container 651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91. May 13 23:46:20.805343 systemd[1]: cri-containerd-651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91.scope: Deactivated successfully. May 13 23:46:20.805874 containerd[1483]: time="2025-05-13T23:46:20.805838004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\" id:\"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\" pid:2977 exited_at:{seconds:1747179980 nanos:805470088}" May 13 23:46:20.806060 systemd[1]: cri-containerd-651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91.scope: Consumed 19ms CPU time, 6.8M memory peak, 2.1M read from disk. May 13 23:46:20.847848 containerd[1483]: time="2025-05-13T23:46:20.847788403Z" level=info msg="received exit event container_id:\"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\" id:\"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\" pid:2977 exited_at:{seconds:1747179980 nanos:805470088}" May 13 23:46:20.850892 containerd[1483]: time="2025-05-13T23:46:20.850708495Z" level=info msg="StartContainer for \"651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91\" returns successfully" May 13 23:46:20.869070 kubelet[2562]: I0513 23:46:20.869036 2562 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:46:20.905306 systemd[1]: Created slice kubepods-burstable-poda787ab7d_0f43_4385_86f0_7cdfb593eb8f.slice - libcontainer container kubepods-burstable-poda787ab7d_0f43_4385_86f0_7cdfb593eb8f.slice. May 13 23:46:20.914261 systemd[1]: Created slice kubepods-burstable-pod947b7408_4ee3_455d_a05e_a598ae98451a.slice - libcontainer container kubepods-burstable-pod947b7408_4ee3_455d_a05e_a598ae98451a.slice. 
May 13 23:46:20.981123 kubelet[2562]: I0513 23:46:20.981013 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfjk\" (UniqueName: \"kubernetes.io/projected/a787ab7d-0f43-4385-86f0-7cdfb593eb8f-kube-api-access-jhfjk\") pod \"coredns-668d6bf9bc-lmt2g\" (UID: \"a787ab7d-0f43-4385-86f0-7cdfb593eb8f\") " pod="kube-system/coredns-668d6bf9bc-lmt2g" May 13 23:46:20.981440 kubelet[2562]: I0513 23:46:20.981299 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/947b7408-4ee3-455d-a05e-a598ae98451a-config-volume\") pod \"coredns-668d6bf9bc-f744l\" (UID: \"947b7408-4ee3-455d-a05e-a598ae98451a\") " pod="kube-system/coredns-668d6bf9bc-f744l" May 13 23:46:20.981440 kubelet[2562]: I0513 23:46:20.981336 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6st\" (UniqueName: \"kubernetes.io/projected/947b7408-4ee3-455d-a05e-a598ae98451a-kube-api-access-cz6st\") pod \"coredns-668d6bf9bc-f744l\" (UID: \"947b7408-4ee3-455d-a05e-a598ae98451a\") " pod="kube-system/coredns-668d6bf9bc-f744l" May 13 23:46:20.981440 kubelet[2562]: I0513 23:46:20.981372 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a787ab7d-0f43-4385-86f0-7cdfb593eb8f-config-volume\") pod \"coredns-668d6bf9bc-lmt2g\" (UID: \"a787ab7d-0f43-4385-86f0-7cdfb593eb8f\") " pod="kube-system/coredns-668d6bf9bc-lmt2g" May 13 23:46:20.984694 containerd[1483]: time="2025-05-13T23:46:20.984655694Z" level=info msg="CreateContainer within sandbox \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 23:46:20.994137 containerd[1483]: time="2025-05-13T23:46:20.994086083Z" level=info msg="Container 948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:21.000252 containerd[1483]: time="2025-05-13T23:46:21.000196185Z" level=info msg="CreateContainer within sandbox \"b91247e92629a1a065199ca849689cc6032d0fe50ae4aaef99bbae9513a652e0\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1\"" May 13 23:46:21.000896 containerd[1483]: time="2025-05-13T23:46:21.000800619Z" level=info msg="StartContainer for \"948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1\"" May 13 23:46:21.001675 containerd[1483]: time="2025-05-13T23:46:21.001645571Z" level=info msg="connecting to shim 948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1" address="unix:///run/containerd/s/5b4933de32af26c0111fe097d8576d3fdd3aff72c264ef11ae72d364eb7a19e3" protocol=ttrpc version=3 May 13 23:46:21.027960 systemd[1]: Started cri-containerd-948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1.scope - libcontainer container 948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1. May 13 23:46:21.060282 containerd[1483]: time="2025-05-13T23:46:21.060172760Z" level=info msg="StartContainer for \"948cafca2159e61fead1ac814a5b6611402fbc5204ac3095f027b7eb24383fe1\" returns successfully" May 13 23:46:21.114506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-651567dff1ac967f5756c457425545813a620aca66e6a613d09210bc081bef91-rootfs.mount: Deactivated successfully. 
May 13 23:46:21.212704 containerd[1483]: time="2025-05-13T23:46:21.212663656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lmt2g,Uid:a787ab7d-0f43-4385-86f0-7cdfb593eb8f,Namespace:kube-system,Attempt:0,}" May 13 23:46:21.217918 containerd[1483]: time="2025-05-13T23:46:21.217589611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f744l,Uid:947b7408-4ee3-455d-a05e-a598ae98451a,Namespace:kube-system,Attempt:0,}" May 13 23:46:21.278997 containerd[1483]: time="2025-05-13T23:46:21.278856535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lmt2g,Uid:a787ab7d-0f43-4385-86f0-7cdfb593eb8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2699aeece643c4845c50274b2168eb7373e89674fd079646c02b517ed9c8389\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:46:21.279255 kubelet[2562]: E0513 23:46:21.279209 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2699aeece643c4845c50274b2168eb7373e89674fd079646c02b517ed9c8389\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:46:21.279463 kubelet[2562]: E0513 23:46:21.279280 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2699aeece643c4845c50274b2168eb7373e89674fd079646c02b517ed9c8389\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-lmt2g" May 13 23:46:21.279463 kubelet[2562]: E0513 23:46:21.279299 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2699aeece643c4845c50274b2168eb7373e89674fd079646c02b517ed9c8389\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-lmt2g" May 13 23:46:21.279463 kubelet[2562]: E0513 23:46:21.279356 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lmt2g_kube-system(a787ab7d-0f43-4385-86f0-7cdfb593eb8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lmt2g_kube-system(a787ab7d-0f43-4385-86f0-7cdfb593eb8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2699aeece643c4845c50274b2168eb7373e89674fd079646c02b517ed9c8389\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-lmt2g" podUID="a787ab7d-0f43-4385-86f0-7cdfb593eb8f" May 13 23:46:21.280370 systemd[1]: run-netns-cni\x2df732c47b\x2d0498\x2dc991\x2deeb3\x2dd3e1ea170bd0.mount: Deactivated successfully. 
May 13 23:46:21.282624 containerd[1483]: time="2025-05-13T23:46:21.282533142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f744l,Uid:947b7408-4ee3-455d-a05e-a598ae98451a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c0e686465e8b62bf3310122abf4489aa3420720b8b1ffbc0ce35004442f3118\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:46:21.283138 kubelet[2562]: E0513 23:46:21.283091 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c0e686465e8b62bf3310122abf4489aa3420720b8b1ffbc0ce35004442f3118\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:46:21.283199 kubelet[2562]: E0513 23:46:21.283154 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c0e686465e8b62bf3310122abf4489aa3420720b8b1ffbc0ce35004442f3118\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-f744l" May 13 23:46:21.283199 kubelet[2562]: E0513 23:46:21.283171 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c0e686465e8b62bf3310122abf4489aa3420720b8b1ffbc0ce35004442f3118\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-f744l" May 13 23:46:21.283245 kubelet[2562]: E0513 23:46:21.283215 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-f744l_kube-system(947b7408-4ee3-455d-a05e-a598ae98451a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-f744l_kube-system(947b7408-4ee3-455d-a05e-a598ae98451a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c0e686465e8b62bf3310122abf4489aa3420720b8b1ffbc0ce35004442f3118\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-f744l" podUID="947b7408-4ee3-455d-a05e-a598ae98451a" May 13 23:46:21.283418 systemd[1]: run-netns-cni\x2de618fb81\x2dd7f3\x2d424c\x2d0ca2\x2ddd2bc99bbdab.mount: Deactivated successfully. 
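Both coredns sandbox failures above come from the flannel CNI plugin: it cannot open /run/flannel/subnet.env because the kube-flannel container started just before has not yet written that file; once flannel.1 gains carrier a few entries later, the retried sandboxes succeed. As a rough illustration only (this is not flannel's actual source; the KEY=VALUE layout and names such as FLANNEL_SUBNET and FLANNEL_MTU are assumptions inferred from the error text and the 192.168.0.0/24 netconf printed further down), a loader for that file might look like:

// Hypothetical sketch of loading /run/flannel/subnet.env, the file the
// "loadFlannelSubnetEnv failed" errors above refer to. Not flannel's real code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func loadSubnetEnv(path string) (map[string]string, error) {
	// Fails with "no such file or directory" until the flannel daemon writes the file.
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	vars := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines are KEY=VALUE, e.g. FLANNEL_SUBNET=192.168.0.1/24 or FLANNEL_MTU=1450
		// (illustrative values, inferred from the delegate netconf logged below).
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			vars[k] = v
		}
	}
	return vars, sc.Err()
}

func main() {
	vars, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Println(vars)
}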
May 13 23:46:22.017689 kubelet[2562]: I0513 23:46:22.017616 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-k9v68" podStartSLOduration=2.406274997 podStartE2EDuration="6.017599237s" podCreationTimestamp="2025-05-13 23:46:16 +0000 UTC" firstStartedPulling="2025-05-13 23:46:17.106626605 +0000 UTC m=+6.258615823" lastFinishedPulling="2025-05-13 23:46:20.717950885 +0000 UTC m=+9.869940063" observedRunningTime="2025-05-13 23:46:22.017315119 +0000 UTC m=+11.169304337" watchObservedRunningTime="2025-05-13 23:46:22.017599237 +0000 UTC m=+11.169588415" May 13 23:46:22.153557 systemd-networkd[1402]: flannel.1: Link UP May 13 23:46:22.153571 systemd-networkd[1402]: flannel.1: Gained carrier May 13 23:46:23.660001 systemd-networkd[1402]: flannel.1: Gained IPv6LL May 13 23:46:24.025275 update_engine[1475]: I20250513 23:46:24.025187 1475 update_attempter.cc:509] Updating boot flags... May 13 23:46:24.050788 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3188) May 13 23:46:35.944141 containerd[1483]: time="2025-05-13T23:46:35.944086850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f744l,Uid:947b7408-4ee3-455d-a05e-a598ae98451a,Namespace:kube-system,Attempt:0,}" May 13 23:46:35.996337 systemd-networkd[1402]: cni0: Link UP May 13 23:46:35.996343 systemd-networkd[1402]: cni0: Gained carrier May 13 23:46:36.000982 systemd-networkd[1402]: cni0: Lost carrier May 13 23:46:36.007385 systemd-networkd[1402]: veth4e7fd2e7: Link UP May 13 23:46:36.010789 kernel: cni0: port 1(veth4e7fd2e7) entered blocking state May 13 23:46:36.010916 kernel: cni0: port 1(veth4e7fd2e7) entered disabled state May 13 23:46:36.010950 kernel: veth4e7fd2e7: entered allmulticast mode May 13 23:46:36.011794 kernel: veth4e7fd2e7: entered promiscuous mode May 13 23:46:36.011840 kernel: cni0: port 1(veth4e7fd2e7) entered blocking state May 13 23:46:36.013258 kernel: cni0: port 1(veth4e7fd2e7) entered forwarding state May 13 23:46:36.014926 kernel: cni0: port 1(veth4e7fd2e7) entered disabled state May 13 23:46:36.031166 kernel: cni0: port 1(veth4e7fd2e7) entered blocking state May 13 23:46:36.031271 kernel: cni0: port 1(veth4e7fd2e7) entered forwarding state May 13 23:46:36.031479 systemd-networkd[1402]: veth4e7fd2e7: Gained carrier May 13 23:46:36.031786 systemd-networkd[1402]: cni0: Gained carrier May 13 23:46:36.033277 containerd[1483]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} May 13 23:46:36.033277 containerd[1483]: delegateAdd: netconf sent to delegate plugin: May 13 23:46:36.076258 containerd[1483]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:46:36.076200920Z" level=info msg="connecting to shim d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6" 
address="unix:///run/containerd/s/3141679d70656c20c05be012ebe3df8a45ef00516ed0523c5fa309f8cf5a6cb5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:36.108961 systemd[1]: Started cri-containerd-d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6.scope - libcontainer container d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6. May 13 23:46:36.125643 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:46:36.147537 containerd[1483]: time="2025-05-13T23:46:36.147481797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f744l,Uid:947b7408-4ee3-455d-a05e-a598ae98451a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6\"" May 13 23:46:36.150433 containerd[1483]: time="2025-05-13T23:46:36.150290265Z" level=info msg="CreateContainer within sandbox \"d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:46:36.161755 containerd[1483]: time="2025-05-13T23:46:36.161685973Z" level=info msg="Container f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:36.167146 containerd[1483]: time="2025-05-13T23:46:36.167084269Z" level=info msg="CreateContainer within sandbox \"d3fc8706544fe3bb935f40ca7b951340783904d84c3b8b489a2ab551cdd706c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91\"" May 13 23:46:36.167975 containerd[1483]: time="2025-05-13T23:46:36.167904385Z" level=info msg="StartContainer for \"f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91\"" May 13 23:46:36.169323 containerd[1483]: time="2025-05-13T23:46:36.169272779Z" level=info msg="connecting to shim f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91" address="unix:///run/containerd/s/3141679d70656c20c05be012ebe3df8a45ef00516ed0523c5fa309f8cf5a6cb5" protocol=ttrpc version=3 May 13 23:46:36.196028 systemd[1]: Started cri-containerd-f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91.scope - libcontainer container f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91. 
May 13 23:46:36.227366 containerd[1483]: time="2025-05-13T23:46:36.227310556Z" level=info msg="StartContainer for \"f269aa6c5d4e0bd7de7050775e66220f8793b7cda8d9b8fc50c4f1a78733ed91\" returns successfully" May 13 23:46:36.946559 containerd[1483]: time="2025-05-13T23:46:36.946284898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lmt2g,Uid:a787ab7d-0f43-4385-86f0-7cdfb593eb8f,Namespace:kube-system,Attempt:0,}" May 13 23:46:36.962981 kernel: cni0: port 2(veth7360f0a6) entered blocking state May 13 23:46:36.963087 kernel: cni0: port 2(veth7360f0a6) entered disabled state May 13 23:46:36.963081 systemd-networkd[1402]: veth7360f0a6: Link UP May 13 23:46:36.964198 kernel: veth7360f0a6: entered allmulticast mode May 13 23:46:36.964954 kernel: veth7360f0a6: entered promiscuous mode May 13 23:46:36.965831 kernel: cni0: port 2(veth7360f0a6) entered blocking state May 13 23:46:36.965854 kernel: cni0: port 2(veth7360f0a6) entered forwarding state May 13 23:46:36.971202 systemd-networkd[1402]: veth7360f0a6: Gained carrier May 13 23:46:36.973455 containerd[1483]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} May 13 23:46:36.973455 containerd[1483]: delegateAdd: netconf sent to delegate plugin: May 13 23:46:37.004712 containerd[1483]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:46:37.004655394Z" level=info msg="connecting to shim fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9" address="unix:///run/containerd/s/90dd40e48848e031a7d2b6f2d78bf8870b786cc5626ed312070f526fb6a387d1" namespace=k8s.io protocol=ttrpc version=3 May 13 23:46:37.032816 systemd[1]: Started cri-containerd-fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9.scope - libcontainer container fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9. 
May 13 23:46:37.047874 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:46:37.049221 kubelet[2562]: I0513 23:46:37.049136 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f744l" podStartSLOduration=21.04911948 podStartE2EDuration="21.04911948s" podCreationTimestamp="2025-05-13 23:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:37.048841481 +0000 UTC m=+26.200830739" watchObservedRunningTime="2025-05-13 23:46:37.04911948 +0000 UTC m=+26.201108698" May 13 23:46:37.076601 containerd[1483]: time="2025-05-13T23:46:37.076534801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lmt2g,Uid:a787ab7d-0f43-4385-86f0-7cdfb593eb8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9\"" May 13 23:46:37.087677 containerd[1483]: time="2025-05-13T23:46:37.087615392Z" level=info msg="CreateContainer within sandbox \"fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:46:37.095809 containerd[1483]: time="2025-05-13T23:46:37.095542518Z" level=info msg="Container a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c: CDI devices from CRI Config.CDIDevices: []" May 13 23:46:37.100812 containerd[1483]: time="2025-05-13T23:46:37.100768135Z" level=info msg="CreateContainer within sandbox \"fa44a9e96030e54f6b05fc480d7c5e51593b7dacac815e95a5e2252fdd0547f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c\"" May 13 23:46:37.101635 containerd[1483]: time="2025-05-13T23:46:37.101589971Z" level=info msg="StartContainer for \"a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c\"" May 13 23:46:37.102781 containerd[1483]: time="2025-05-13T23:46:37.102729486Z" level=info msg="connecting to shim a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c" address="unix:///run/containerd/s/90dd40e48848e031a7d2b6f2d78bf8870b786cc5626ed312070f526fb6a387d1" protocol=ttrpc version=3 May 13 23:46:37.122998 systemd[1]: Started cri-containerd-a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c.scope - libcontainer container a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c. May 13 23:46:37.162418 containerd[1483]: time="2025-05-13T23:46:37.162366907Z" level=info msg="StartContainer for \"a8b330c6ee9ca06d711fe6157eb9b02361ca3e8c2cb920c7453c9a5c8b45c05c\" returns successfully" May 13 23:46:37.236422 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:54024.service - OpenSSH per-connection server daemon (10.0.0.1:54024). May 13 23:46:37.300084 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 54024 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:37.302068 sshd-session[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:37.307255 systemd-logind[1474]: New session 6 of user core. May 13 23:46:37.315937 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:46:37.419962 systemd-networkd[1402]: veth4e7fd2e7: Gained IPv6LL May 13 23:46:37.420237 systemd-networkd[1402]: cni0: Gained IPv6LL May 13 23:46:37.449786 sshd[3491]: Connection closed by 10.0.0.1 port 54024 May 13 23:46:37.450260 sshd-session[3479]: pam_unix(sshd:session): session closed for user core May 13 23:46:37.454073 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:54024.service: Deactivated successfully. May 13 23:46:37.455955 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:46:37.456596 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. May 13 23:46:37.457463 systemd-logind[1474]: Removed session 6. May 13 23:46:38.060335 kubelet[2562]: I0513 23:46:38.060267 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lmt2g" podStartSLOduration=22.060249004 podStartE2EDuration="22.060249004s" podCreationTimestamp="2025-05-13 23:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:46:38.059879685 +0000 UTC m=+27.211868903" watchObservedRunningTime="2025-05-13 23:46:38.060249004 +0000 UTC m=+27.212238222" May 13 23:46:38.955880 systemd-networkd[1402]: veth7360f0a6: Gained IPv6LL May 13 23:46:42.461166 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:56082.service - OpenSSH per-connection server daemon (10.0.0.1:56082). May 13 23:46:42.526853 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 56082 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:42.528196 sshd-session[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:42.532504 systemd-logind[1474]: New session 7 of user core. May 13 23:46:42.541939 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:46:42.655726 sshd[3548]: Connection closed by 10.0.0.1 port 56082 May 13 23:46:42.656115 sshd-session[3546]: pam_unix(sshd:session): session closed for user core May 13 23:46:42.659454 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:56082.service: Deactivated successfully. May 13 23:46:42.661182 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:46:42.666799 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. May 13 23:46:42.669117 systemd-logind[1474]: Removed session 7. May 13 23:46:47.674388 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090). May 13 23:46:47.736530 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:47.737865 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:47.744541 systemd-logind[1474]: New session 8 of user core. May 13 23:46:47.756897 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:46:47.888711 sshd[3587]: Connection closed by 10.0.0.1 port 56090 May 13 23:46:47.890410 sshd-session[3585]: pam_unix(sshd:session): session closed for user core May 13 23:46:47.903331 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:56090.service: Deactivated successfully. May 13 23:46:47.905905 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:46:47.908179 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. May 13 23:46:47.911880 systemd-logind[1474]: Removed session 8. 
May 13 23:46:47.917542 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:56096.service - OpenSSH per-connection server daemon (10.0.0.1:56096). May 13 23:46:47.983193 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 56096 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:47.984572 sshd-session[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:47.991818 systemd-logind[1474]: New session 9 of user core. May 13 23:46:48.000937 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:46:48.179203 sshd[3603]: Connection closed by 10.0.0.1 port 56096 May 13 23:46:48.180303 sshd-session[3601]: pam_unix(sshd:session): session closed for user core May 13 23:46:48.193609 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:56096.service: Deactivated successfully. May 13 23:46:48.196840 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:46:48.198527 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. May 13 23:46:48.201610 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:56102.service - OpenSSH per-connection server daemon (10.0.0.1:56102). May 13 23:46:48.204678 systemd-logind[1474]: Removed session 9. May 13 23:46:48.273161 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 56102 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:48.275453 sshd-session[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:48.286108 systemd-logind[1474]: New session 10 of user core. May 13 23:46:48.293947 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:46:48.458319 sshd[3616]: Connection closed by 10.0.0.1 port 56102 May 13 23:46:48.459091 sshd-session[3613]: pam_unix(sshd:session): session closed for user core May 13 23:46:48.462463 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. May 13 23:46:48.462732 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:56102.service: Deactivated successfully. May 13 23:46:48.464560 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:46:48.465721 systemd-logind[1474]: Removed session 10. May 13 23:46:53.482645 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:51088.service - OpenSSH per-connection server daemon (10.0.0.1:51088). May 13 23:46:53.553806 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 51088 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:53.554809 sshd-session[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:53.561832 systemd-logind[1474]: New session 11 of user core. May 13 23:46:53.573158 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:46:53.723920 sshd[3652]: Connection closed by 10.0.0.1 port 51088 May 13 23:46:53.725979 sshd-session[3650]: pam_unix(sshd:session): session closed for user core May 13 23:46:53.741249 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:51088.service: Deactivated successfully. May 13 23:46:53.744075 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:46:53.745568 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. May 13 23:46:53.750111 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:51096.service - OpenSSH per-connection server daemon (10.0.0.1:51096). May 13 23:46:53.752971 systemd-logind[1474]: Removed session 11. 
May 13 23:46:53.820251 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 51096 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:53.821789 sshd-session[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:53.828718 systemd-logind[1474]: New session 12 of user core. May 13 23:46:53.839021 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:46:54.117482 sshd[3667]: Connection closed by 10.0.0.1 port 51096 May 13 23:46:54.119337 sshd-session[3664]: pam_unix(sshd:session): session closed for user core May 13 23:46:54.129894 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:51096.service: Deactivated successfully. May 13 23:46:54.132899 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:46:54.135785 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. May 13 23:46:54.138337 systemd-logind[1474]: Removed session 12. May 13 23:46:54.141354 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:51100.service - OpenSSH per-connection server daemon (10.0.0.1:51100). May 13 23:46:54.208005 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 51100 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:54.209940 sshd-session[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:54.214825 systemd-logind[1474]: New session 13 of user core. May 13 23:46:54.225974 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:46:55.537469 sshd[3680]: Connection closed by 10.0.0.1 port 51100 May 13 23:46:55.538493 sshd-session[3677]: pam_unix(sshd:session): session closed for user core May 13 23:46:55.555507 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:51100.service: Deactivated successfully. May 13 23:46:55.564028 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:46:55.566610 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. May 13 23:46:55.569896 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:51116.service - OpenSSH per-connection server daemon (10.0.0.1:51116). May 13 23:46:55.572594 systemd-logind[1474]: Removed session 13. May 13 23:46:55.642492 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 51116 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:55.644179 sshd-session[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:55.649673 systemd-logind[1474]: New session 14 of user core. May 13 23:46:55.666007 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:46:55.909998 sshd[3701]: Connection closed by 10.0.0.1 port 51116 May 13 23:46:55.910852 sshd-session[3698]: pam_unix(sshd:session): session closed for user core May 13 23:46:55.925032 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:51116.service: Deactivated successfully. May 13 23:46:55.926930 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:46:55.927728 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. May 13 23:46:55.929831 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:51126.service - OpenSSH per-connection server daemon (10.0.0.1:51126). May 13 23:46:55.932738 systemd-logind[1474]: Removed session 14. 
May 13 23:46:55.990256 sshd[3711]: Accepted publickey for core from 10.0.0.1 port 51126 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:46:55.991929 sshd-session[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:46:55.996616 systemd-logind[1474]: New session 15 of user core. May 13 23:46:56.003963 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:46:56.119541 sshd[3714]: Connection closed by 10.0.0.1 port 51126 May 13 23:46:56.120083 sshd-session[3711]: pam_unix(sshd:session): session closed for user core May 13 23:46:56.123834 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:51126.service: Deactivated successfully. May 13 23:46:56.125953 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:46:56.126657 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit. May 13 23:46:56.127661 systemd-logind[1474]: Removed session 15. May 13 23:47:01.131899 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:51130.service - OpenSSH per-connection server daemon (10.0.0.1:51130). May 13 23:47:01.191363 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 51130 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:47:01.192948 sshd-session[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:01.199633 systemd-logind[1474]: New session 16 of user core. May 13 23:47:01.213991 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:47:01.352979 sshd[3753]: Connection closed by 10.0.0.1 port 51130 May 13 23:47:01.351981 sshd-session[3751]: pam_unix(sshd:session): session closed for user core May 13 23:47:01.355677 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:51130.service: Deactivated successfully. May 13 23:47:01.361966 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:47:01.363228 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit. May 13 23:47:01.364453 systemd-logind[1474]: Removed session 16. May 13 23:47:06.370407 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:35986.service - OpenSSH per-connection server daemon (10.0.0.1:35986). May 13 23:47:06.426976 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 35986 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:47:06.428584 sshd-session[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:06.436084 systemd-logind[1474]: New session 17 of user core. May 13 23:47:06.446375 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 23:47:06.570779 sshd[3789]: Connection closed by 10.0.0.1 port 35986 May 13 23:47:06.571324 sshd-session[3787]: pam_unix(sshd:session): session closed for user core May 13 23:47:06.575173 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:35986.service: Deactivated successfully. May 13 23:47:06.577397 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:47:06.579642 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit. May 13 23:47:06.581317 systemd-logind[1474]: Removed session 17. May 13 23:47:11.589337 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:36002.service - OpenSSH per-connection server daemon (10.0.0.1:36002). 
May 13 23:47:11.647994 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 36002 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:47:11.649321 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:11.653351 systemd-logind[1474]: New session 18 of user core. May 13 23:47:11.663002 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:47:11.785598 sshd[3827]: Connection closed by 10.0.0.1 port 36002 May 13 23:47:11.786226 sshd-session[3825]: pam_unix(sshd:session): session closed for user core May 13 23:47:11.789788 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:36002.service: Deactivated successfully. May 13 23:47:11.791705 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:47:11.793490 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit. May 13 23:47:11.794394 systemd-logind[1474]: Removed session 18.