May 13 23:47:12.898794 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:47:12.898816 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue May 13 22:07:09 -00 2025 May 13 23:47:12.898827 kernel: KASLR enabled May 13 23:47:12.898832 kernel: efi: EFI v2.7 by EDK II May 13 23:47:12.898838 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 May 13 23:47:12.898844 kernel: random: crng init done May 13 23:47:12.898851 kernel: secureboot: Secure boot disabled May 13 23:47:12.898857 kernel: ACPI: Early table checksum verification disabled May 13 23:47:12.898863 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 13 23:47:12.898871 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 23:47:12.898877 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898883 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898889 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898895 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898903 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898910 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898917 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898923 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898930 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:12.898936 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 23:47:12.898942 kernel: NUMA: Failed to initialise from firmware May 13 23:47:12.898949 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:12.898955 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff] May 13 23:47:12.898962 kernel: Zone ranges: May 13 23:47:12.898968 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:12.898976 kernel: DMA32 empty May 13 23:47:12.898982 kernel: Normal empty May 13 23:47:12.899011 kernel: Movable zone start for each node May 13 23:47:12.899018 kernel: Early memory node ranges May 13 23:47:12.899024 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] May 13 23:47:12.899031 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] May 13 23:47:12.899037 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] May 13 23:47:12.899044 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 13 23:47:12.899050 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 13 23:47:12.899057 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 13 23:47:12.899063 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 13 23:47:12.899070 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 13 23:47:12.899079 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 23:47:12.899085 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:12.899092 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 23:47:12.899101 kernel: psci: 
probing for conduit method from ACPI. May 13 23:47:12.899108 kernel: psci: PSCIv1.1 detected in firmware. May 13 23:47:12.899115 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:47:12.899123 kernel: psci: Trusted OS migration not required May 13 23:47:12.899130 kernel: psci: SMC Calling Convention v1.1 May 13 23:47:12.899137 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 23:47:12.899144 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:47:12.899151 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:47:12.899158 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 23:47:12.899165 kernel: Detected PIPT I-cache on CPU0 May 13 23:47:12.899171 kernel: CPU features: detected: GIC system register CPU interface May 13 23:47:12.899178 kernel: CPU features: detected: Hardware dirty bit management May 13 23:47:12.899185 kernel: CPU features: detected: Spectre-v4 May 13 23:47:12.899193 kernel: CPU features: detected: Spectre-BHB May 13 23:47:12.899200 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:47:12.899214 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:47:12.899221 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:47:12.899227 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:47:12.899234 kernel: alternatives: applying boot alternatives May 13 23:47:12.899242 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902 May 13 23:47:12.899249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:47:12.899255 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:47:12.899262 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:47:12.899269 kernel: Fallback order for Node 0: 0 May 13 23:47:12.899278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 23:47:12.899285 kernel: Policy zone: DMA May 13 23:47:12.899291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:47:12.899298 kernel: software IO TLB: area num 4. May 13 23:47:12.899304 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 13 23:47:12.899311 kernel: Memory: 2387480K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184808K reserved, 0K cma-reserved) May 13 23:47:12.899318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:47:12.899324 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:47:12.899332 kernel: rcu: RCU event tracing is enabled. May 13 23:47:12.899339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:47:12.899345 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:47:12.899352 kernel: Tracing variant of Tasks RCU enabled. May 13 23:47:12.899360 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:47:12.899367 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:47:12.899373 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:47:12.899380 kernel: GICv3: 256 SPIs implemented May 13 23:47:12.899387 kernel: GICv3: 0 Extended SPIs implemented May 13 23:47:12.899393 kernel: Root IRQ handler: gic_handle_irq May 13 23:47:12.899400 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:47:12.899407 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 23:47:12.899413 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 23:47:12.899420 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 23:47:12.899427 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 23:47:12.899449 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 23:47:12.899456 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 23:47:12.899463 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:47:12.899471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:12.899477 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:47:12.899484 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:47:12.899491 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:47:12.899497 kernel: arm-pv: using stolen time PV May 13 23:47:12.899504 kernel: Console: colour dummy device 80x25 May 13 23:47:12.899511 kernel: ACPI: Core revision 20230628 May 13 23:47:12.899518 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:47:12.899526 kernel: pid_max: default: 32768 minimum: 301 May 13 23:47:12.899533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:47:12.899540 kernel: landlock: Up and running. May 13 23:47:12.899546 kernel: SELinux: Initializing. May 13 23:47:12.899553 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:47:12.899560 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:47:12.899567 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 23:47:12.899574 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:47:12.899582 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:47:12.899590 kernel: rcu: Hierarchical SRCU implementation. May 13 23:47:12.899597 kernel: rcu: Max phase no-delay instances is 400. May 13 23:47:12.899604 kernel: Platform MSI: ITS@0x8080000 domain created May 13 23:47:12.899610 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 23:47:12.899617 kernel: Remapping and enabling EFI services. May 13 23:47:12.899624 kernel: smp: Bringing up secondary CPUs ... 
May 13 23:47:12.899630 kernel: Detected PIPT I-cache on CPU1 May 13 23:47:12.899637 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 23:47:12.899644 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 23:47:12.899652 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:12.899659 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:47:12.899671 kernel: Detected PIPT I-cache on CPU2 May 13 23:47:12.899680 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 23:47:12.899687 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 23:47:12.899694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:12.899701 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 23:47:12.899708 kernel: Detected PIPT I-cache on CPU3 May 13 23:47:12.899715 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 23:47:12.899722 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 23:47:12.899731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:12.899737 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 23:47:12.899744 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:47:12.899751 kernel: SMP: Total of 4 processors activated. May 13 23:47:12.899758 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:47:12.899766 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:47:12.899773 kernel: CPU features: detected: Common not Private translations May 13 23:47:12.899781 kernel: CPU features: detected: CRC32 instructions May 13 23:47:12.899788 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:47:12.899795 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:47:12.899802 kernel: CPU features: detected: LSE atomic instructions May 13 23:47:12.899809 kernel: CPU features: detected: Privileged Access Never May 13 23:47:12.899816 kernel: CPU features: detected: RAS Extension Support May 13 23:47:12.899823 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:47:12.899830 kernel: CPU: All CPU(s) started at EL1 May 13 23:47:12.899837 kernel: alternatives: applying system-wide alternatives May 13 23:47:12.899846 kernel: devtmpfs: initialized May 13 23:47:12.899853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:47:12.899860 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:47:12.899867 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:47:12.899874 kernel: SMBIOS 3.0.0 present. 
May 13 23:47:12.899881 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 23:47:12.899888 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:47:12.899895 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:47:12.899902 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:47:12.899911 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:47:12.899918 kernel: audit: initializing netlink subsys (disabled) May 13 23:47:12.899926 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 May 13 23:47:12.899933 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:47:12.899940 kernel: cpuidle: using governor menu May 13 23:47:12.899947 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:47:12.899954 kernel: ASID allocator initialised with 32768 entries May 13 23:47:12.899961 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:47:12.899968 kernel: Serial: AMBA PL011 UART driver May 13 23:47:12.899976 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:47:12.899984 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:47:12.899997 kernel: Modules: 509264 pages in range for PLT usage May 13 23:47:12.900004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:47:12.900012 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:47:12.900019 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:47:12.900026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:47:12.900033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:47:12.900040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:47:12.900049 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:47:12.900056 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:47:12.900063 kernel: ACPI: Added _OSI(Module Device) May 13 23:47:12.900070 kernel: ACPI: Added _OSI(Processor Device) May 13 23:47:12.900077 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:47:12.900084 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:47:12.900091 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:47:12.900098 kernel: ACPI: Interpreter enabled May 13 23:47:12.900105 kernel: ACPI: Using GIC for interrupt routing May 13 23:47:12.900112 kernel: ACPI: MCFG table detected, 1 entries May 13 23:47:12.900120 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 23:47:12.900127 kernel: printk: console [ttyAMA0] enabled May 13 23:47:12.900135 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:47:12.900284 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:47:12.900360 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 23:47:12.900429 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 23:47:12.900494 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 23:47:12.900584 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 23:47:12.900595 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 23:47:12.900603 
kernel: PCI host bridge to bus 0000:00 May 13 23:47:12.900675 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 23:47:12.900737 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 23:47:12.900797 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 23:47:12.900856 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:47:12.900942 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 23:47:12.901097 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:47:12.901172 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 23:47:12.901264 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 23:47:12.901337 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:47:12.901405 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:47:12.901471 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 23:47:12.901544 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 23:47:12.901605 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 23:47:12.901663 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 23:47:12.901722 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 23:47:12.901731 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 23:47:12.901739 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 23:47:12.901746 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 23:47:12.901756 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 23:47:12.901763 kernel: iommu: Default domain type: Translated May 13 23:47:12.901771 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:47:12.901778 kernel: efivars: Registered efivars operations May 13 23:47:12.901786 kernel: vgaarb: loaded May 13 23:47:12.901793 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:47:12.901800 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:47:12.901808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:47:12.901815 kernel: pnp: PnP ACPI init May 13 23:47:12.901888 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 23:47:12.901899 kernel: pnp: PnP ACPI: found 1 devices May 13 23:47:12.901907 kernel: NET: Registered PF_INET protocol family May 13 23:47:12.901914 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:47:12.901922 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:47:12.901929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:47:12.901937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:47:12.901944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:47:12.901954 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:47:12.901961 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:47:12.901969 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:47:12.901976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:47:12.901984 kernel: PCI: CLS 0 bytes, default 64 May 13 23:47:12.902010 kernel: kvm [1]: HYP mode not available 
May 13 23:47:12.902018 kernel: Initialise system trusted keyrings May 13 23:47:12.902026 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:47:12.902033 kernel: Key type asymmetric registered May 13 23:47:12.902043 kernel: Asymmetric key parser 'x509' registered May 13 23:47:12.902050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:47:12.902058 kernel: io scheduler mq-deadline registered May 13 23:47:12.902065 kernel: io scheduler kyber registered May 13 23:47:12.902072 kernel: io scheduler bfq registered May 13 23:47:12.902080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:47:12.902087 kernel: ACPI: button: Power Button [PWRB] May 13 23:47:12.902095 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:47:12.902171 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 23:47:12.902184 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:47:12.902191 kernel: thunder_xcv, ver 1.0 May 13 23:47:12.902199 kernel: thunder_bgx, ver 1.0 May 13 23:47:12.902214 kernel: nicpf, ver 1.0 May 13 23:47:12.902222 kernel: nicvf, ver 1.0 May 13 23:47:12.902302 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:47:12.902368 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:47:12 UTC (1747180032) May 13 23:47:12.902378 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:47:12.902386 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:47:12.902410 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:47:12.902417 kernel: watchdog: Hard watchdog permanently disabled May 13 23:47:12.902424 kernel: NET: Registered PF_INET6 protocol family May 13 23:47:12.902432 kernel: Segment Routing with IPv6 May 13 23:47:12.902439 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:47:12.902447 kernel: NET: Registered PF_PACKET protocol family May 13 23:47:12.902454 kernel: Key type dns_resolver registered May 13 23:47:12.902461 kernel: registered taskstats version 1 May 13 23:47:12.902469 kernel: Loading compiled-in X.509 certificates May 13 23:47:12.902478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: a696ab665a89a9a0c31af520821335479551e0bb' May 13 23:47:12.902485 kernel: Key type .fscrypt registered May 13 23:47:12.902492 kernel: Key type fscrypt-provisioning registered May 13 23:47:12.902499 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:47:12.902506 kernel: ima: Allocated hash algorithm: sha1 May 13 23:47:12.902513 kernel: ima: No architecture policies found May 13 23:47:12.902520 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:47:12.902527 kernel: clk: Disabling unused clocks May 13 23:47:12.902536 kernel: Freeing unused kernel memory: 38336K May 13 23:47:12.902543 kernel: Run /init as init process May 13 23:47:12.902551 kernel: with arguments: May 13 23:47:12.902558 kernel: /init May 13 23:47:12.902565 kernel: with environment: May 13 23:47:12.902572 kernel: HOME=/ May 13 23:47:12.902579 kernel: TERM=linux May 13 23:47:12.902586 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:47:12.902594 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:47:12.902605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:47:12.902613 systemd[1]: Detected virtualization kvm. May 13 23:47:12.902621 systemd[1]: Detected architecture arm64. May 13 23:47:12.902628 systemd[1]: Running in initrd. May 13 23:47:12.902636 systemd[1]: No hostname configured, using default hostname. May 13 23:47:12.902643 systemd[1]: Hostname set to . May 13 23:47:12.902651 systemd[1]: Initializing machine ID from VM UUID. May 13 23:47:12.902660 systemd[1]: Queued start job for default target initrd.target. May 13 23:47:12.902668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:12.902676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:47:12.902684 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:47:12.902692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:47:12.902700 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:47:12.902708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:47:12.902718 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:47:12.902727 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:47:12.902734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:12.902742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:12.902749 systemd[1]: Reached target paths.target - Path Units. May 13 23:47:12.902757 systemd[1]: Reached target slices.target - Slice Units. May 13 23:47:12.902765 systemd[1]: Reached target swap.target - Swaps. May 13 23:47:12.902772 systemd[1]: Reached target timers.target - Timer Units. May 13 23:47:12.902780 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:47:12.902789 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:47:12.902797 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:47:12.902805 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:47:12.902813 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:12.902820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:47:12.902828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:12.902836 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:47:12.902844 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:47:12.902853 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:47:12.902861 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:47:12.902868 systemd[1]: Starting systemd-fsck-usr.service... 
May 13 23:47:12.902876 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:47:12.902884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:47:12.902891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:47:12.902899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:47:12.902907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:47:12.902916 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:47:12.902924 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:47:12.902948 systemd-journald[239]: Collecting audit messages is disabled. May 13 23:47:12.902969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:12.902977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:47:12.903005 systemd-journald[239]: Journal started May 13 23:47:12.903038 systemd-journald[239]: Runtime Journal (/run/log/journal/6ba65ee1ea8842bc88c749a2682e04bd) is 5.9M, max 47.3M, 41.4M free. May 13 23:47:12.894099 systemd-modules-load[240]: Inserted module 'overlay' May 13 23:47:12.905519 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:47:12.906535 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:47:12.909560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:47:12.909580 kernel: Bridge firewalling registered May 13 23:47:12.909976 systemd-modules-load[240]: Inserted module 'br_netfilter' May 13 23:47:12.911117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:47:12.913392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:47:12.914737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:47:12.916288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:47:12.922441 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:12.924543 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:47:12.927044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:12.930295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:12.931330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:12.934949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:47:12.943412 dracut-cmdline[272]: dracut-dracut-053 May 13 23:47:12.945794 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902 May 13 23:47:12.965518 systemd-resolved[278]: Positive Trust Anchors: May 13 23:47:12.965538 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:47:12.965570 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:47:12.970259 systemd-resolved[278]: Defaulting to hostname 'linux'. May 13 23:47:12.971678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:47:12.972851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:47:13.016040 kernel: SCSI subsystem initialized May 13 23:47:13.021009 kernel: Loading iSCSI transport class v2.0-870. May 13 23:47:13.028016 kernel: iscsi: registered transport (tcp) May 13 23:47:13.041027 kernel: iscsi: registered transport (qla4xxx) May 13 23:47:13.041082 kernel: QLogic iSCSI HBA Driver May 13 23:47:13.081974 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:47:13.090183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:47:13.107756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:47:13.107809 kernel: device-mapper: uevent: version 1.0.3 May 13 23:47:13.107823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:47:13.154013 kernel: raid6: neonx8 gen() 15795 MB/s May 13 23:47:13.171014 kernel: raid6: neonx4 gen() 15600 MB/s May 13 23:47:13.188010 kernel: raid6: neonx2 gen() 12943 MB/s May 13 23:47:13.205008 kernel: raid6: neonx1 gen() 10401 MB/s May 13 23:47:13.222004 kernel: raid6: int64x8 gen() 6769 MB/s May 13 23:47:13.239007 kernel: raid6: int64x4 gen() 7322 MB/s May 13 23:47:13.256007 kernel: raid6: int64x2 gen() 6033 MB/s May 13 23:47:13.273005 kernel: raid6: int64x1 gen() 5015 MB/s May 13 23:47:13.273021 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s May 13 23:47:13.290012 kernel: raid6: .... xor() 11339 MB/s, rmw enabled May 13 23:47:13.290025 kernel: raid6: using neon recovery algorithm May 13 23:47:13.295001 kernel: xor: measuring software checksum speed May 13 23:47:13.295019 kernel: 8regs : 21647 MB/sec May 13 23:47:13.296456 kernel: 32regs : 20444 MB/sec May 13 23:47:13.296470 kernel: arm64_neon : 27917 MB/sec May 13 23:47:13.296479 kernel: xor: using function: arm64_neon (27917 MB/sec) May 13 23:47:13.347010 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:47:13.357666 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:47:13.366211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:13.379043 systemd-udevd[461]: Using default interface naming scheme 'v255'. May 13 23:47:13.382770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:13.385464 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 13 23:47:13.400030 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation May 13 23:47:13.426105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:47:13.436153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:47:13.476023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:47:13.484175 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:47:13.497883 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:47:13.499283 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:47:13.501160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:47:13.503630 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:47:13.515315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:47:13.523903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:47:13.530428 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 23:47:13.530574 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:47:13.537299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:47:13.537336 kernel: GPT:9289727 != 19775487 May 13 23:47:13.537353 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:47:13.537363 kernel: GPT:9289727 != 19775487 May 13 23:47:13.538046 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:47:13.538064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:47:13.540484 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:47:13.540753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:13.545352 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:47:13.546254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:47:13.546394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:13.550878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:47:13.558336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:47:13.565961 kernel: BTRFS: device fsid 3ace022a-b896-4c57-9fc3-590600d2a560 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (517) May 13 23:47:13.568017 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512) May 13 23:47:13.583790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:13.591354 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:47:13.598685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:47:13.610980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:47:13.617823 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:47:13.618752 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:47:13.632392 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 13 23:47:13.634421 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:47:13.639285 disk-uuid[557]: Primary Header is updated. May 13 23:47:13.639285 disk-uuid[557]: Secondary Entries is updated. May 13 23:47:13.639285 disk-uuid[557]: Secondary Header is updated. May 13 23:47:13.646041 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:47:13.652074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:14.654030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:47:14.654146 disk-uuid[558]: The operation has completed successfully. May 13 23:47:14.690236 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:47:14.690331 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:47:14.721165 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:47:14.727544 sh[577]: Success May 13 23:47:14.745266 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:47:14.775742 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:47:14.789538 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:47:14.791049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:47:14.804025 kernel: BTRFS info (device dm-0): first mount of filesystem 3ace022a-b896-4c57-9fc3-590600d2a560 May 13 23:47:14.804104 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:47:14.804125 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:47:14.804144 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:47:14.805328 kernel: BTRFS info (device dm-0): using free space tree May 13 23:47:14.809982 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:47:14.810871 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:47:14.817210 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:47:14.818654 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:47:14.837752 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:47:14.837808 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:47:14.837819 kernel: BTRFS info (device vda6): using free space tree May 13 23:47:14.842004 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:47:14.846028 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:47:14.849673 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:47:14.869451 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:47:14.916791 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:47:14.926254 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:47:14.966072 systemd-networkd[761]: lo: Link UP May 13 23:47:14.966084 systemd-networkd[761]: lo: Gained carrier May 13 23:47:14.967156 systemd-networkd[761]: Enumeration completed May 13 23:47:14.967328 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 13 23:47:14.968193 systemd[1]: Reached target network.target - Network. May 13 23:47:14.970214 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:14.970218 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:47:14.972883 systemd-networkd[761]: eth0: Link UP May 13 23:47:14.972886 systemd-networkd[761]: eth0: Gained carrier May 13 23:47:14.972893 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:14.985736 ignition[672]: Ignition 2.20.0 May 13 23:47:14.985745 ignition[672]: Stage: fetch-offline May 13 23:47:14.985784 ignition[672]: no configs at "/usr/lib/ignition/base.d" May 13 23:47:14.985793 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:14.985945 ignition[672]: parsed url from cmdline: "" May 13 23:47:14.985948 ignition[672]: no config URL provided May 13 23:47:14.985953 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:47:14.985960 ignition[672]: no config at "/usr/lib/ignition/user.ign" May 13 23:47:14.985982 ignition[672]: op(1): [started] loading QEMU firmware config module May 13 23:47:14.985986 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:47:14.990878 ignition[672]: op(1): [finished] loading QEMU firmware config module May 13 23:47:14.993054 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:47:15.013022 ignition[672]: parsing config with SHA512: 2db320074b6992d7a9f4d686a31edbb3a005c96e991252aea7b066f4a43c60ce2e497f1e2f2e93f6f449c46fde1c99f2d623696431f917c26d27ac5e3a345619 May 13 23:47:15.017576 unknown[672]: fetched base config from "system" May 13 23:47:15.017586 unknown[672]: fetched user config from "qemu" May 13 23:47:15.017963 ignition[672]: fetch-offline: fetch-offline passed May 13 23:47:15.018064 ignition[672]: Ignition finished successfully May 13 23:47:15.019886 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:47:15.021163 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 23:47:15.032159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:47:15.043957 ignition[775]: Ignition 2.20.0 May 13 23:47:15.043967 ignition[775]: Stage: kargs May 13 23:47:15.044202 ignition[775]: no configs at "/usr/lib/ignition/base.d" May 13 23:47:15.044212 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:15.045030 ignition[775]: kargs: kargs passed May 13 23:47:15.048007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:47:15.045073 ignition[775]: Ignition finished successfully May 13 23:47:15.049629 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:47:15.062210 ignition[784]: Ignition 2.20.0 May 13 23:47:15.062220 ignition[784]: Stage: disks May 13 23:47:15.062376 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 13 23:47:15.062385 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:15.064134 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 13 23:47:15.063195 ignition[784]: disks: disks passed May 13 23:47:15.065774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:47:15.063241 ignition[784]: Ignition finished successfully May 13 23:47:15.066825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:47:15.069052 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:47:15.070702 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:47:15.072111 systemd[1]: Reached target basic.target - Basic System. May 13 23:47:15.087252 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:47:15.097750 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:47:15.102066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:47:15.114196 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:47:15.165023 kernel: EXT4-fs (vda9): mounted filesystem 2a058080-4242-485a-9945-403b4258c5f5 r/w with ordered data mode. Quota mode: none. May 13 23:47:15.165053 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:47:15.166268 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:47:15.178118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:47:15.180680 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:47:15.181506 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:47:15.181546 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:47:15.181571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:47:15.186807 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:47:15.189801 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:47:15.193501 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804) May 13 23:47:15.193534 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:47:15.193545 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:47:15.194188 kernel: BTRFS info (device vda6): using free space tree May 13 23:47:15.197003 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:47:15.197658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:47:15.237051 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:47:15.240955 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory May 13 23:47:15.244534 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:47:15.248292 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:47:15.322699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:47:15.335122 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:47:15.336596 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:47:15.342054 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:47:15.357212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 13 23:47:15.360471 ignition[917]: INFO : Ignition 2.20.0 May 13 23:47:15.360471 ignition[917]: INFO : Stage: mount May 13 23:47:15.361736 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:47:15.361736 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:15.361736 ignition[917]: INFO : mount: mount passed May 13 23:47:15.361736 ignition[917]: INFO : Ignition finished successfully May 13 23:47:15.364245 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:47:15.372104 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:47:15.930479 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:47:15.941223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:47:15.948823 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929) May 13 23:47:15.948861 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:47:15.948872 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:47:15.950743 kernel: BTRFS info (device vda6): using free space tree May 13 23:47:15.953018 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:47:15.954408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:47:15.971641 ignition[946]: INFO : Ignition 2.20.0 May 13 23:47:15.971641 ignition[946]: INFO : Stage: files May 13 23:47:15.972905 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:47:15.972905 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:15.972905 ignition[946]: DEBUG : files: compiled without relabeling support, skipping May 13 23:47:15.975866 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:47:15.975866 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:47:15.979212 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:47:15.980241 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:47:15.980241 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:47:15.979944 unknown[946]: wrote ssh authorized keys file for user: core May 13 23:47:15.983323 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:47:15.983323 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 23:47:16.115658 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:47:16.159144 systemd-networkd[761]: eth0: Gained IPv6LL May 13 23:47:16.794215 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:47:16.796059 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 23:47:17.100480 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:47:17.445151 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:47:17.445151 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 23:47:17.448709 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 13 23:47:17.470349 ignition[946]: INFO : files: op(f): op(10): 
[started] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:47:17.473596 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:47:17.476174 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 13 23:47:17.476174 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 13 23:47:17.476174 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:47:17.476174 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:47:17.476174 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:47:17.476174 ignition[946]: INFO : files: files passed May 13 23:47:17.476174 ignition[946]: INFO : Ignition finished successfully May 13 23:47:17.477868 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:47:17.488190 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:47:17.491555 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:47:17.494717 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:47:17.495588 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:47:17.498375 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory May 13 23:47:17.500837 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:47:17.500837 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:47:17.503209 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:47:17.502917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:47:17.504223 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:47:17.507727 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:47:17.546949 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:47:17.547098 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:47:17.548837 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:47:17.550410 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:47:17.551936 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:47:17.552867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:47:17.570850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:47:17.584190 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:47:17.592002 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:47:17.594401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:47:17.595892 systemd[1]: Stopped target timers.target - Timer Units. 
May 13 23:47:17.597623 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:47:17.597753 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:47:17.599691 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:47:17.601341 systemd[1]: Stopped target basic.target - Basic System. May 13 23:47:17.602624 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:47:17.603897 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:47:17.605386 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:47:17.606879 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:47:17.608490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:47:17.609970 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:47:17.611677 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:47:17.612944 systemd[1]: Stopped target swap.target - Swaps. May 13 23:47:17.614744 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:47:17.614885 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:47:17.616701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:17.618233 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:17.620014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:47:17.620178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:17.621874 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:47:17.622005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:47:17.624303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:47:17.624424 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:47:17.626022 systemd[1]: Stopped target paths.target - Path Units. May 13 23:47:17.627412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:47:17.627589 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:47:17.628954 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:47:17.630454 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:47:17.632023 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:47:17.632118 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:47:17.633521 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:47:17.633606 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:47:17.635561 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:47:17.635695 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:47:17.637095 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:47:17.637209 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:47:17.652222 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:47:17.652938 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:47:17.653084 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
May 13 23:47:17.659975 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:47:17.662211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:47:17.662353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:47:17.667739 ignition[1001]: INFO : Ignition 2.20.0 May 13 23:47:17.667739 ignition[1001]: INFO : Stage: umount May 13 23:47:17.667739 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:47:17.667739 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:47:17.667739 ignition[1001]: INFO : umount: umount passed May 13 23:47:17.667739 ignition[1001]: INFO : Ignition finished successfully May 13 23:47:17.665453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:47:17.665558 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:47:17.669429 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:47:17.669522 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:47:17.673637 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:47:17.673721 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:47:17.682270 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:47:17.683091 systemd[1]: Stopped target network.target - Network. May 13 23:47:17.684392 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:47:17.684466 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:47:17.687474 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:47:17.687531 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:47:17.688952 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:47:17.689023 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:47:17.690770 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:47:17.690815 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:47:17.694573 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:47:17.697555 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:47:17.699320 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:47:17.699415 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:47:17.701768 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:47:17.701893 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:47:17.706265 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:47:17.706384 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:47:17.709593 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:47:17.709879 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:47:17.709918 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:17.715574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:47:17.715813 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:47:17.715908 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 13 23:47:17.718273 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:47:17.718799 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:47:17.718886 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:17.732149 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:47:17.732828 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:47:17.732894 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:47:17.734628 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:47:17.734676 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:17.736256 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:47:17.736299 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:47:17.737884 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:17.742412 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:47:17.748562 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:47:17.748664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:47:17.751431 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:47:17.751552 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:17.753392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:47:17.753466 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:47:17.755409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:47:17.755445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:17.756976 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:47:17.757063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:47:17.759159 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:47:17.759205 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:47:17.761423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:47:17.761471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:17.774197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:47:17.775019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:47:17.775081 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:17.777807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:47:17.777854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:17.782254 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:47:17.783067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:47:17.785220 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:47:17.787054 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:47:17.796328 systemd[1]: Switching root. 
May 13 23:47:17.825064 systemd-journald[239]: Journal stopped May 13 23:47:18.672597 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 13 23:47:18.672651 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:47:18.672662 kernel: SELinux: policy capability open_perms=1 May 13 23:47:18.672672 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:47:18.672681 kernel: SELinux: policy capability always_check_network=0 May 13 23:47:18.672691 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:47:18.672700 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:47:18.672709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:47:18.672720 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:47:18.672730 kernel: audit: type=1403 audit(1747180038.023:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:47:18.672742 systemd[1]: Successfully loaded SELinux policy in 32.405ms. May 13 23:47:18.672762 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.276ms. May 13 23:47:18.672773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:47:18.672784 systemd[1]: Detected virtualization kvm. May 13 23:47:18.672794 systemd[1]: Detected architecture arm64. May 13 23:47:18.672804 systemd[1]: Detected first boot. May 13 23:47:18.672814 systemd[1]: Initializing machine ID from VM UUID. May 13 23:47:18.672825 zram_generator::config[1050]: No configuration found. May 13 23:47:18.672838 kernel: NET: Registered PF_VSOCK protocol family May 13 23:47:18.672848 systemd[1]: Populated /etc with preset unit settings. May 13 23:47:18.672859 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:47:18.672870 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:47:18.672881 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:47:18.672892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:47:18.672904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:47:18.672916 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:47:18.672927 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:47:18.672940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:47:18.672952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:47:18.672967 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:47:18.672979 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:47:18.673109 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:47:18.673137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:18.673151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:47:18.673345 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
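systemd logs the detected virtualization (kvm), the architecture (arm64), and that the machine ID is being initialized from the VM UUID. A small sketch of reading the same facts back from a running system; it only assumes the standard systemd tools are on PATH and is not specific to this host.

```python
# Sketch: read back the facts systemd reports at startup (virtualization,
# architecture, machine ID). Values naturally differ between hosts.
import pathlib
import subprocess

virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True).stdout.strip()
arch = subprocess.run(["uname", "-m"], capture_output=True, text=True).stdout.strip()
machine_id = pathlib.Path("/etc/machine-id").read_text().strip()

print(f"virt={virt} arch={arch} machine-id={machine_id}")
```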
May 13 23:47:18.673365 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:47:18.674117 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:47:18.674152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:47:18.674164 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 23:47:18.674175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:18.674185 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:47:18.674196 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:47:18.674207 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:47:18.674224 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:47:18.674240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:47:18.674251 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:47:18.674265 systemd[1]: Reached target slices.target - Slice Units. May 13 23:47:18.674276 systemd[1]: Reached target swap.target - Swaps. May 13 23:47:18.674287 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:47:18.674297 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:47:18.674308 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:47:18.674319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:18.674332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:47:18.674343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:18.674354 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:47:18.674365 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:47:18.674376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:47:18.674387 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:47:18.674398 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:47:18.674408 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:47:18.674419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:47:18.674433 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:47:18.674444 systemd[1]: Reached target machines.target - Containers. May 13 23:47:18.674454 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:47:18.674466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:47:18.674476 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:47:18.674487 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:47:18.674499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:47:18.674510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 13 23:47:18.674523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:47:18.674534 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:47:18.674544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:47:18.674555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:47:18.674567 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:47:18.674577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:47:18.674588 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:47:18.674598 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:47:18.674609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:47:18.674621 kernel: fuse: init (API version 7.39) May 13 23:47:18.674631 kernel: loop: module loaded May 13 23:47:18.674641 kernel: ACPI: bus type drm_connector registered May 13 23:47:18.674651 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:47:18.674661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:47:18.674672 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:47:18.674683 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:47:18.674717 systemd-journald[1118]: Collecting audit messages is disabled. May 13 23:47:18.674746 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:47:18.674757 systemd-journald[1118]: Journal started May 13 23:47:18.674781 systemd-journald[1118]: Runtime Journal (/run/log/journal/6ba65ee1ea8842bc88c749a2682e04bd) is 5.9M, max 47.3M, 41.4M free. May 13 23:47:18.469101 systemd[1]: Queued start job for default target multi-user.target. May 13 23:47:18.481053 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:47:18.483275 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:47:18.677602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:47:18.677657 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:47:18.678255 systemd[1]: Stopped verity-setup.service. May 13 23:47:18.684493 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:47:18.685232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:47:18.686219 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:47:18.687162 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:47:18.688154 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:47:18.689111 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:47:18.690244 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:47:18.691338 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:47:18.694048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 13 23:47:18.695261 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:47:18.695431 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:47:18.696589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:47:18.696756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:47:18.699356 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:47:18.699515 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:47:18.700579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:47:18.700751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:47:18.701959 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:47:18.702165 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:47:18.703459 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:47:18.703650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:47:18.704934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:47:18.706376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:47:18.707646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:47:18.709184 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:47:18.724971 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:47:18.737111 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:47:18.739098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:47:18.739900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:47:18.739938 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:47:18.741614 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:47:18.744256 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:47:18.746434 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:47:18.747427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:47:18.749137 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:47:18.751091 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:47:18.752126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:47:18.756241 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:47:18.757244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:47:18.759452 systemd-journald[1118]: Time spent on flushing to /var/log/journal/6ba65ee1ea8842bc88c749a2682e04bd is 26.951ms for 864 entries. May 13 23:47:18.759452 systemd-journald[1118]: System Journal (/var/log/journal/6ba65ee1ea8842bc88c749a2682e04bd) is 8M, max 195.6M, 187.6M free. 
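journald reports the runtime journal capacity here and, just below, flushes /run logs into the persistent system journal under /var/log/journal. The resulting on-disk usage can be read back with journalctl; a one-line sketch using a standard flag, with nothing host-specific assumed.

```python
# Sketch: report total journal disk usage after the runtime journal has been
# flushed to /var/log/journal. `journalctl --disk-usage` is a standard flag.
import subprocess

subprocess.run(["journalctl", "--disk-usage"], check=True)
```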
May 13 23:47:18.799848 systemd-journald[1118]: Received client request to flush runtime journal. May 13 23:47:18.760348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:47:18.767769 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:47:18.769962 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:47:18.774767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:47:18.777160 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:47:18.779439 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:47:18.780923 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:47:18.786332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:47:18.790404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:47:18.803226 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:47:18.806249 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:47:18.809032 kernel: loop0: detected capacity change from 0 to 113512 May 13 23:47:18.811848 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:47:18.825389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:18.826971 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:47:18.836334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:47:18.837371 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:47:18.849052 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:47:18.859236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:47:18.859986 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. May 13 23:47:18.860113 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. May 13 23:47:18.866060 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:18.900019 kernel: loop1: detected capacity change from 0 to 123192 May 13 23:47:18.949081 kernel: loop2: detected capacity change from 0 to 194096 May 13 23:47:18.982314 kernel: loop3: detected capacity change from 0 to 113512 May 13 23:47:18.988019 kernel: loop4: detected capacity change from 0 to 123192 May 13 23:47:18.994090 kernel: loop5: detected capacity change from 0 to 194096 May 13 23:47:19.002699 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:47:19.003225 (sd-merge)[1193]: Merged extensions into '/usr'. May 13 23:47:19.007044 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:47:19.007067 systemd[1]: Reloading... May 13 23:47:19.073008 zram_generator::config[1220]: No configuration found. May 13 23:47:19.129175 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
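The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, after which systemd reloads. A minimal sketch of inspecting that merge afterwards; `list` and `status` are standard systemd-sysext verbs, and nothing beyond a system with systemd-sysext is assumed.

```python
# Sketch: inspect the sysext merge reported by the (sd-merge) lines above.
import subprocess

def sysext(verb: str) -> str:
    return subprocess.run(["systemd-sysext", verb],
                          capture_output=True, text=True, check=True).stdout

print(sysext("list"))    # extension images found under /etc/extensions, /var/lib/extensions, ...
print(sysext("status"))  # hierarchies (/usr, /opt) and the extensions currently merged into them
```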
May 13 23:47:19.178579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:19.229605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:47:19.229719 systemd[1]: Reloading finished in 222 ms. May 13 23:47:19.250645 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:47:19.252295 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:47:19.278431 systemd[1]: Starting ensure-sysext.service... May 13 23:47:19.280263 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:47:19.286202 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:47:19.294282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:19.297638 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... May 13 23:47:19.297654 systemd[1]: Reloading... May 13 23:47:19.304496 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:47:19.304706 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:47:19.305409 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:47:19.305625 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. May 13 23:47:19.305680 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. May 13 23:47:19.309165 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:47:19.309178 systemd-tmpfiles[1256]: Skipping /boot May 13 23:47:19.319277 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:47:19.319293 systemd-tmpfiles[1256]: Skipping /boot May 13 23:47:19.327198 systemd-udevd[1259]: Using default interface naming scheme 'v255'. May 13 23:47:19.361048 zram_generator::config[1290]: No configuration found. May 13 23:47:19.427025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1292) May 13 23:47:19.474892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:19.548449 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 23:47:19.548627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:47:19.550055 systemd[1]: Reloading finished in 252 ms. May 13 23:47:19.565622 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:19.581813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:19.601234 systemd[1]: Finished ensure-sysext.service. May 13 23:47:19.602268 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:47:19.629269 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:47:19.631812 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
May 13 23:47:19.632883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:47:19.634188 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:47:19.639301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:47:19.642769 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:47:19.648335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:47:19.652714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:47:19.654068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:47:19.656278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:47:19.659381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:47:19.664294 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:47:19.667762 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:47:19.673440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:47:19.684278 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:47:19.688268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:47:19.691649 lvm[1354]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:47:19.695222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:47:19.696917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:47:19.700099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:47:19.701503 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:47:19.701682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:47:19.704912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:47:19.705131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:47:19.706580 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:47:19.706740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:47:19.708319 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:47:19.717908 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:47:19.721511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:47:19.721813 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:47:19.734808 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:47:19.740371 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:47:19.743375 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
May 13 23:47:19.744964 augenrules[1395]: No rules May 13 23:47:19.745188 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:47:19.746822 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:47:19.747241 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:47:19.748706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:47:19.750603 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:47:19.758743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:19.768247 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:47:19.769221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:47:19.771775 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:47:19.772641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:19.780000 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:47:19.808030 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:47:19.850636 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:47:19.852128 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:47:19.878253 systemd-resolved[1370]: Positive Trust Anchors: May 13 23:47:19.878268 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:47:19.878300 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:47:19.878314 systemd-networkd[1367]: lo: Link UP May 13 23:47:19.878318 systemd-networkd[1367]: lo: Gained carrier May 13 23:47:19.879295 systemd-networkd[1367]: Enumeration completed May 13 23:47:19.879446 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:47:19.880096 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:19.880111 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:47:19.882447 systemd-networkd[1367]: eth0: Link UP May 13 23:47:19.882456 systemd-networkd[1367]: eth0: Gained carrier May 13 23:47:19.882471 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:47:19.885605 systemd-resolved[1370]: Defaulting to hostname 'linux'. May 13 23:47:19.887243 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
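eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network policy, gains carrier, and enumeration completes; the DHCPv4 lease itself is logged a few lines further on. A minimal sketch of reading that link state back with networkctl, the standard systemd-networkd client; the interface name is the one in the log.

```python
# Sketch: query systemd-networkd for the link state described above.
import subprocess

subprocess.run(["networkctl", "list"], check=True)            # one summary line per link
subprocess.run(["networkctl", "status", "eth0"], check=True)  # addresses, gateway, DNS, matching .network file
```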
May 13 23:47:19.889447 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:47:19.891143 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:47:19.892210 systemd[1]: Reached target network.target - Network. May 13 23:47:19.893161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:47:19.894415 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:47:19.897625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:47:19.898794 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:47:19.900034 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:47:19.901230 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:47:19.902251 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:47:19.903341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:47:19.903470 systemd[1]: Reached target paths.target - Path Units. May 13 23:47:19.904266 systemd[1]: Reached target timers.target - Timer Units. May 13 23:47:19.906081 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:47:19.908395 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:47:19.910148 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:47:19.911615 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:47:19.912962 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:47:19.914028 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:47:19.916970 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. May 13 23:47:19.920809 systemd-timesyncd[1371]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:47:19.920863 systemd-timesyncd[1371]: Initial clock synchronization to Tue 2025-05-13 23:47:19.757160 UTC. May 13 23:47:19.926958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:47:19.928774 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:47:19.931500 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:47:19.932919 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:47:19.934876 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:47:19.935911 systemd[1]: Reached target basic.target - Basic System. May 13 23:47:19.936907 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:47:19.936945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:47:19.948235 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:47:19.950255 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:47:19.954603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
May 13 23:47:19.956763 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:47:19.957657 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:47:19.961230 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:47:19.965668 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:47:19.969762 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:47:19.970389 jq[1426]: false May 13 23:47:19.972140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:47:19.978974 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:47:19.980811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:47:19.981897 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:47:19.983147 extend-filesystems[1427]: Found loop3 May 13 23:47:19.983939 extend-filesystems[1427]: Found loop4 May 13 23:47:19.983939 extend-filesystems[1427]: Found loop5 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda May 13 23:47:19.983939 extend-filesystems[1427]: Found vda1 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda2 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda3 May 13 23:47:19.983939 extend-filesystems[1427]: Found usr May 13 23:47:19.983939 extend-filesystems[1427]: Found vda4 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda6 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda7 May 13 23:47:19.983939 extend-filesystems[1427]: Found vda9 May 13 23:47:19.983939 extend-filesystems[1427]: Checking size of /dev/vda9 May 13 23:47:19.986274 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:47:19.995634 dbus-daemon[1425]: [system] SELinux support is enabled May 13 23:47:19.988885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:47:19.995579 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:47:19.996058 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:47:20.002756 jq[1442]: true May 13 23:47:19.996265 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:47:20.003707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:47:20.004063 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:47:20.011783 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:47:20.015205 jq[1448]: true May 13 23:47:20.016821 extend-filesystems[1427]: Resized partition /dev/vda9 May 13 23:47:20.014094 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:47:20.019331 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:47:20.030404 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
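The kernel reports the ext4 grow above in 4 KiB blocks (553472 to 1864699). Converting both figures to bytes shows the root filesystem growing from roughly 2.1 GiB to about 7.1 GiB; a two-line check of that arithmetic:

```python
# The ext4 resize logged above is given in 4 KiB blocks; convert to GiB.
BLOCK = 4096
for blocks in (553472, 1864699):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
# -> 553472 blocks = 2.11 GiB (before), 1864699 blocks = 7.11 GiB (after)
```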
May 13 23:47:20.030439 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:47:20.033363 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:47:20.033445 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:47:20.036731 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) May 13 23:47:20.045008 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:47:20.045714 tar[1446]: linux-arm64/helm May 13 23:47:20.061109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1313) May 13 23:47:20.075517 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:47:20.100691 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:47:20.100691 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:47:20.100691 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:47:20.107170 extend-filesystems[1427]: Resized filesystem in /dev/vda9 May 13 23:47:20.108510 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:47:20.108728 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:47:20.113291 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:47:20.114019 bash[1477]: Updated "/home/core/.ssh/authorized_keys" May 13 23:47:20.114280 systemd-logind[1434]: New seat seat0. May 13 23:47:20.119098 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:47:20.120361 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:47:20.123916 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:47:20.127721 update_engine[1436]: I20250513 23:47:20.127190 1436 main.cc:92] Flatcar Update Engine starting May 13 23:47:20.138398 systemd[1]: Started update-engine.service - Update Engine. May 13 23:47:20.138539 update_engine[1436]: I20250513 23:47:20.138393 1436 update_check_scheduler.cc:74] Next update check in 4m21s May 13 23:47:20.144301 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:47:20.251069 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:47:20.305420 containerd[1449]: time="2025-05-13T23:47:20.305287189Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 23:47:20.333700 containerd[1449]: time="2025-05-13T23:47:20.333647731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.335330 containerd[1449]: time="2025-05-13T23:47:20.335292011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 23:47:20.335330 containerd[1449]: time="2025-05-13T23:47:20.335327432Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 13 23:47:20.335411 containerd[1449]: time="2025-05-13T23:47:20.335346514Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 23:47:20.335522 containerd[1449]: time="2025-05-13T23:47:20.335503909Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 23:47:20.335546 containerd[1449]: time="2025-05-13T23:47:20.335527261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.335596 containerd[1449]: time="2025-05-13T23:47:20.335581489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:47:20.335626 containerd[1449]: time="2025-05-13T23:47:20.335596065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.335819 containerd[1449]: time="2025-05-13T23:47:20.335797422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:47:20.335819 containerd[1449]: time="2025-05-13T23:47:20.335816622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.335885 containerd[1449]: time="2025-05-13T23:47:20.335830727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:47:20.335885 containerd[1449]: time="2025-05-13T23:47:20.335840445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.335926 containerd[1449]: time="2025-05-13T23:47:20.335910816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.336163 containerd[1449]: time="2025-05-13T23:47:20.336143871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 23:47:20.336295 containerd[1449]: time="2025-05-13T23:47:20.336273094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:47:20.336295 containerd[1449]: time="2025-05-13T23:47:20.336290805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 23:47:20.336385 containerd[1449]: time="2025-05-13T23:47:20.336370305Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 23:47:20.336454 containerd[1449]: time="2025-05-13T23:47:20.336417559Z" level=info msg="metadata content store policy set" policy=shared May 13 23:47:20.343187 containerd[1449]: time="2025-05-13T23:47:20.343136874Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 23:47:20.343307 containerd[1449]: time="2025-05-13T23:47:20.343213123Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 13 23:47:20.343307 containerd[1449]: time="2025-05-13T23:47:20.343232400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 23:47:20.343307 containerd[1449]: time="2025-05-13T23:47:20.343259319Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 23:47:20.343307 containerd[1449]: time="2025-05-13T23:47:20.343275462Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 23:47:20.343493 containerd[1449]: time="2025-05-13T23:47:20.343473998Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 23:47:20.343897 containerd[1449]: time="2025-05-13T23:47:20.343736910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 23:47:20.343897 containerd[1449]: time="2025-05-13T23:47:20.343856729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 23:47:20.343897 containerd[1449]: time="2025-05-13T23:47:20.343876673Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.343900888Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.343915385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.343929334Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.343953588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.343974785Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344013 containerd[1449]: time="2025-05-13T23:47:20.344004133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344019610Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344032540Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344043824Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344068588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344081283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344093076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344105654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344120347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344130 containerd[1449]: time="2025-05-13T23:47:20.344133317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344144993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344156983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344168150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344180727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344191345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344203531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344214855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344228177Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344247141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344260306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344275 containerd[1449]: time="2025-05-13T23:47:20.344270180Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 23:47:20.344446 containerd[1449]: time="2025-05-13T23:47:20.344432747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 23:47:20.344465 containerd[1449]: time="2025-05-13T23:47:20.344450026Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 23:47:20.344465 containerd[1449]: time="2025-05-13T23:47:20.344459900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 23:47:20.344500 containerd[1449]: time="2025-05-13T23:47:20.344472047Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 23:47:20.344500 containerd[1449]: time="2025-05-13T23:47:20.344481568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 13 23:47:20.344500 containerd[1449]: time="2025-05-13T23:47:20.344493558Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 23:47:20.344548 containerd[1449]: time="2025-05-13T23:47:20.344508956Z" level=info msg="NRI interface is disabled by configuration." May 13 23:47:20.344548 containerd[1449]: time="2025-05-13T23:47:20.344523140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 23:47:20.344926 containerd[1449]: time="2025-05-13T23:47:20.344868688Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 23:47:20.345129 containerd[1449]: time="2025-05-13T23:47:20.344934357Z" level=info msg="Connect containerd service" May 13 23:47:20.345129 containerd[1449]: time="2025-05-13T23:47:20.344981846Z" level=info msg="using legacy CRI server" May 13 23:47:20.345129 containerd[1449]: time="2025-05-13T23:47:20.345005786Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:47:20.345259 containerd[1449]: 
time="2025-05-13T23:47:20.345242564Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 23:47:20.345934 containerd[1449]: time="2025-05-13T23:47:20.345894321Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:47:20.346494 containerd[1449]: time="2025-05-13T23:47:20.346442911Z" level=info msg="Start subscribing containerd event" May 13 23:47:20.346540 containerd[1449]: time="2025-05-13T23:47:20.346502468Z" level=info msg="Start recovering state" May 13 23:47:20.346601 containerd[1449]: time="2025-05-13T23:47:20.346584829Z" level=info msg="Start event monitor" May 13 23:47:20.348421 containerd[1449]: time="2025-05-13T23:47:20.348402961Z" level=info msg="Start snapshots syncer" May 13 23:47:20.348421 containerd[1449]: time="2025-05-13T23:47:20.348422043Z" level=info msg="Start cni network conf syncer for default" May 13 23:47:20.348503 containerd[1449]: time="2025-05-13T23:47:20.348436148Z" level=info msg="Start streaming server" May 13 23:47:20.352915 containerd[1449]: time="2025-05-13T23:47:20.352879409Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:47:20.352980 containerd[1449]: time="2025-05-13T23:47:20.352940337Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:47:20.353061 containerd[1449]: time="2025-05-13T23:47:20.353015058Z" level=info msg="containerd successfully booted in 0.051348s" May 13 23:47:20.353107 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:47:20.422717 tar[1446]: linux-arm64/LICENSE May 13 23:47:20.422717 tar[1446]: linux-arm64/README.md May 13 23:47:20.434020 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:47:20.972047 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:47:20.993071 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:47:20.999280 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:47:21.005117 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:47:21.005358 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:47:21.010279 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:47:21.019815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:47:21.032397 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:47:21.034708 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:47:21.035873 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:47:21.471117 systemd-networkd[1367]: eth0: Gained IPv6LL May 13 23:47:21.473520 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:47:21.475377 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:47:21.491342 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:47:21.494484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:21.496653 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:47:21.514953 systemd[1]: coreos-metadata.service: Deactivated successfully. 
May 13 23:47:21.515260 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:47:21.516835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:47:21.530317 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:47:22.129124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:22.130723 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:47:22.134803 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:22.136668 systemd[1]: Startup finished in 569ms (kernel) + 5.326s (initrd) + 4.144s (userspace) = 10.040s. May 13 23:47:23.267207 kubelet[1538]: E0513 23:47:23.267103 1538 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:23.269996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:23.270165 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:23.271069 systemd[1]: kubelet.service: Consumed 1.319s CPU time, 241.6M memory peak. May 13 23:47:25.227887 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:47:25.237275 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:33200.service - OpenSSH per-connection server daemon (10.0.0.1:33200). May 13 23:47:25.311099 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:47:25.313406 sshd-session[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:25.321101 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:47:25.334305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:47:25.340011 systemd-logind[1434]: New session 1 of user core. May 13 23:47:25.346605 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:47:25.349318 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:47:25.356588 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:47:25.359942 systemd-logind[1434]: New session c1 of user core. May 13 23:47:25.481944 systemd[1556]: Queued start job for default target default.target. May 13 23:47:25.492970 systemd[1556]: Created slice app.slice - User Application Slice. May 13 23:47:25.493023 systemd[1556]: Reached target paths.target - Paths. May 13 23:47:25.493063 systemd[1556]: Reached target timers.target - Timers. May 13 23:47:25.494326 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:47:25.504086 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:47:25.504159 systemd[1556]: Reached target sockets.target - Sockets. May 13 23:47:25.504204 systemd[1556]: Reached target basic.target - Basic System. May 13 23:47:25.504233 systemd[1556]: Reached target default.target - Main User Target. May 13 23:47:25.504259 systemd[1556]: Startup finished in 137ms. 
May 13 23:47:25.504462 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:47:25.505896 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:47:25.573984 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:33214.service - OpenSSH per-connection server daemon (10.0.0.1:33214). May 13 23:47:25.616975 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 33214 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:47:25.618209 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:25.622372 systemd-logind[1434]: New session 2 of user core. May 13 23:47:25.641188 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:47:25.692708 sshd[1569]: Connection closed by 10.0.0.1 port 33214 May 13 23:47:25.693155 sshd-session[1567]: pam_unix(sshd:session): session closed for user core May 13 23:47:25.706110 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:33214.service: Deactivated successfully. May 13 23:47:25.707861 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:47:25.710066 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. May 13 23:47:25.711319 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:33218.service - OpenSSH per-connection server daemon (10.0.0.1:33218). May 13 23:47:25.713532 systemd-logind[1434]: Removed session 2. May 13 23:47:25.759678 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 33218 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:47:25.760814 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:25.765100 systemd-logind[1434]: New session 3 of user core. May 13 23:47:25.774225 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:47:25.822907 sshd[1577]: Connection closed by 10.0.0.1 port 33218 May 13 23:47:25.823274 sshd-session[1574]: pam_unix(sshd:session): session closed for user core May 13 23:47:25.838829 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:33218.service: Deactivated successfully. May 13 23:47:25.840542 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:47:25.842207 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. May 13 23:47:25.852499 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:33226.service - OpenSSH per-connection server daemon (10.0.0.1:33226). May 13 23:47:25.853760 systemd-logind[1434]: Removed session 3. May 13 23:47:25.893350 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 33226 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:47:25.894587 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:25.899125 systemd-logind[1434]: New session 4 of user core. May 13 23:47:25.911193 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:47:25.962517 sshd[1585]: Connection closed by 10.0.0.1 port 33226 May 13 23:47:25.962878 sshd-session[1582]: pam_unix(sshd:session): session closed for user core May 13 23:47:25.983170 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:33226.service: Deactivated successfully. May 13 23:47:25.985770 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:47:25.988092 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. May 13 23:47:25.988706 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:33238.service - OpenSSH per-connection server daemon (10.0.0.1:33238). 
May 13 23:47:25.990500 systemd-logind[1434]: Removed session 4. May 13 23:47:26.031888 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 33238 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:47:26.033103 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:47:26.037712 systemd-logind[1434]: New session 5 of user core. May 13 23:47:26.047238 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:47:26.170626 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:47:26.170927 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:47:26.576293 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:47:26.576447 (dockerd)[1613]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:47:26.848603 dockerd[1613]: time="2025-05-13T23:47:26.847609370Z" level=info msg="Starting up" May 13 23:47:27.071303 dockerd[1613]: time="2025-05-13T23:47:27.071066126Z" level=info msg="Loading containers: start." May 13 23:47:27.257048 kernel: Initializing XFRM netlink socket May 13 23:47:27.382665 systemd-networkd[1367]: docker0: Link UP May 13 23:47:27.416349 dockerd[1613]: time="2025-05-13T23:47:27.416294109Z" level=info msg="Loading containers: done." May 13 23:47:27.438560 dockerd[1613]: time="2025-05-13T23:47:27.438506647Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:47:27.438709 dockerd[1613]: time="2025-05-13T23:47:27.438637627Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 13 23:47:27.438841 dockerd[1613]: time="2025-05-13T23:47:27.438808444Z" level=info msg="Daemon has completed initialization" May 13 23:47:27.472776 dockerd[1613]: time="2025-05-13T23:47:27.472542966Z" level=info msg="API listen on /run/docker.sock" May 13 23:47:27.472733 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:47:28.249928 containerd[1449]: time="2025-05-13T23:47:28.249799376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 23:47:28.877491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726154081.mount: Deactivated successfully. 
May 13 23:47:30.391181 containerd[1449]: time="2025-05-13T23:47:30.391125289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:30.392402 containerd[1449]: time="2025-05-13T23:47:30.392358420Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 13 23:47:30.393110 containerd[1449]: time="2025-05-13T23:47:30.393043794Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:30.395948 containerd[1449]: time="2025-05-13T23:47:30.395917733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:30.397310 containerd[1449]: time="2025-05-13T23:47:30.397129301Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.147283278s" May 13 23:47:30.397310 containerd[1449]: time="2025-05-13T23:47:30.397176645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 23:47:30.416370 containerd[1449]: time="2025-05-13T23:47:30.416317735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 23:47:32.174287 containerd[1449]: time="2025-05-13T23:47:32.174230052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:32.175603 containerd[1449]: time="2025-05-13T23:47:32.175554495Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 13 23:47:32.176569 containerd[1449]: time="2025-05-13T23:47:32.176522293Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:32.180025 containerd[1449]: time="2025-05-13T23:47:32.179972188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:32.181688 containerd[1449]: time="2025-05-13T23:47:32.181531380Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.764901005s" May 13 23:47:32.181688 containerd[1449]: time="2025-05-13T23:47:32.181563806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 
23:47:32.201006 containerd[1449]: time="2025-05-13T23:47:32.200945104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 23:47:33.187231 containerd[1449]: time="2025-05-13T23:47:33.187179615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:33.188305 containerd[1449]: time="2025-05-13T23:47:33.188093470Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 13 23:47:33.189277 containerd[1449]: time="2025-05-13T23:47:33.189237692Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:33.192911 containerd[1449]: time="2025-05-13T23:47:33.192855848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:33.194013 containerd[1449]: time="2025-05-13T23:47:33.193961171Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 992.978138ms" May 13 23:47:33.194118 containerd[1449]: time="2025-05-13T23:47:33.194014697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 23:47:33.215711 containerd[1449]: time="2025-05-13T23:47:33.215427382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:47:33.520494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:47:33.530227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:33.633142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:33.637050 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:47:33.679380 kubelet[1906]: E0513 23:47:33.679285 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:47:33.683140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:47:33.683279 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:47:33.683765 systemd[1]: kubelet.service: Consumed 134ms CPU time, 97.5M memory peak. May 13 23:47:34.292358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226305343.mount: Deactivated successfully. 
May 13 23:47:34.494414 containerd[1449]: time="2025-05-13T23:47:34.494370598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:34.495699 containerd[1449]: time="2025-05-13T23:47:34.495624313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 23:47:34.496613 containerd[1449]: time="2025-05-13T23:47:34.496566174Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:34.498428 containerd[1449]: time="2025-05-13T23:47:34.498397063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:34.499760 containerd[1449]: time="2025-05-13T23:47:34.499736507Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.284258105s" May 13 23:47:34.499834 containerd[1449]: time="2025-05-13T23:47:34.499763382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 23:47:34.517028 containerd[1449]: time="2025-05-13T23:47:34.516981324Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:47:35.021512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175332076.mount: Deactivated successfully. 
May 13 23:47:35.785534 containerd[1449]: time="2025-05-13T23:47:35.785488354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:35.787140 containerd[1449]: time="2025-05-13T23:47:35.787061921Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 13 23:47:35.789041 containerd[1449]: time="2025-05-13T23:47:35.788248837Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:35.791355 containerd[1449]: time="2025-05-13T23:47:35.791299319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:35.792401 containerd[1449]: time="2025-05-13T23:47:35.792355078Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.275320716s" May 13 23:47:35.792401 containerd[1449]: time="2025-05-13T23:47:35.792393412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 23:47:35.812465 containerd[1449]: time="2025-05-13T23:47:35.812249004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 23:47:36.281265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831582503.mount: Deactivated successfully. 
May 13 23:47:36.286039 containerd[1449]: time="2025-05-13T23:47:36.286000303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.286620 containerd[1449]: time="2025-05-13T23:47:36.286580060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 13 23:47:36.287553 containerd[1449]: time="2025-05-13T23:47:36.287525054Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.290222 containerd[1449]: time="2025-05-13T23:47:36.289886061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:36.290779 containerd[1449]: time="2025-05-13T23:47:36.290755357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 478.332751ms" May 13 23:47:36.290829 containerd[1449]: time="2025-05-13T23:47:36.290787120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 23:47:36.310652 containerd[1449]: time="2025-05-13T23:47:36.310612309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 23:47:36.852711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194732615.mount: Deactivated successfully. May 13 23:47:38.897942 containerd[1449]: time="2025-05-13T23:47:38.897885709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.898410 containerd[1449]: time="2025-05-13T23:47:38.898288643Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 13 23:47:38.899257 containerd[1449]: time="2025-05-13T23:47:38.899229661Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.902629 containerd[1449]: time="2025-05-13T23:47:38.902575546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:47:38.904185 containerd[1449]: time="2025-05-13T23:47:38.904135936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.593480407s" May 13 23:47:38.904246 containerd[1449]: time="2025-05-13T23:47:38.904187561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 23:47:43.435903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:47:43.436068 systemd[1]: kubelet.service: Consumed 134ms CPU time, 97.5M memory peak. May 13 23:47:43.450255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:43.466197 systemd[1]: Reload requested from client PID 2121 ('systemctl') (unit session-5.scope)... May 13 23:47:43.466215 systemd[1]: Reloading... May 13 23:47:43.547044 zram_generator::config[2166]: No configuration found. May 13 23:47:43.663035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:43.738676 systemd[1]: Reloading finished in 272 ms. May 13 23:47:43.785780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:43.788172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:43.790098 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:43.791083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:43.791151 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.4M memory peak. May 13 23:47:43.792903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:44.045677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:44.053113 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:44.102903 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:44.102903 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:47:44.102903 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:47:44.103300 kubelet[2212]: I0513 23:47:44.102984 2212 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:45.017439 kubelet[2212]: I0513 23:47:45.017385 2212 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:47:45.017439 kubelet[2212]: I0513 23:47:45.017417 2212 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:45.017696 kubelet[2212]: I0513 23:47:45.017647 2212 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:47:45.051460 kubelet[2212]: E0513 23:47:45.051427 2212 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.052181 kubelet[2212]: I0513 23:47:45.052062 2212 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:45.070758 kubelet[2212]: I0513 23:47:45.070723 2212 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:47:45.071165 kubelet[2212]: I0513 23:47:45.071130 2212 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:45.071331 kubelet[2212]: I0513 23:47:45.071165 2212 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:47:45.071404 kubelet[2212]: I0513 23:47:45.071393 2212 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:47:45.071404 kubelet[2212]: I0513 23:47:45.071404 2212 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:47:45.071679 kubelet[2212]: I0513 23:47:45.071658 2212 state_mem.go:36] "Initialized new in-memory state store" May 13 
23:47:45.072754 kubelet[2212]: I0513 23:47:45.072724 2212 kubelet.go:400] "Attempting to sync node with API server" May 13 23:47:45.072754 kubelet[2212]: I0513 23:47:45.072751 2212 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:45.072861 kubelet[2212]: I0513 23:47:45.072844 2212 kubelet.go:312] "Adding apiserver pod source" May 13 23:47:45.073079 kubelet[2212]: I0513 23:47:45.073064 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:45.073516 kubelet[2212]: W0513 23:47:45.073457 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.073555 kubelet[2212]: E0513 23:47:45.073537 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.073788 kubelet[2212]: W0513 23:47:45.073751 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.073843 kubelet[2212]: E0513 23:47:45.073795 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.076576 kubelet[2212]: I0513 23:47:45.076285 2212 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:47:45.079677 kubelet[2212]: I0513 23:47:45.076865 2212 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:45.079677 kubelet[2212]: W0513 23:47:45.077060 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 23:47:45.079677 kubelet[2212]: I0513 23:47:45.078140 2212 server.go:1264] "Started kubelet" May 13 23:47:45.079677 kubelet[2212]: I0513 23:47:45.078271 2212 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:45.079677 kubelet[2212]: I0513 23:47:45.079517 2212 server.go:455] "Adding debug handlers to kubelet server" May 13 23:47:45.086657 kubelet[2212]: I0513 23:47:45.082013 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:45.088364 kubelet[2212]: I0513 23:47:45.088296 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:45.088570 kubelet[2212]: I0513 23:47:45.088546 2212 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:45.088707 kubelet[2212]: I0513 23:47:45.088688 2212 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:47:45.088850 kubelet[2212]: I0513 23:47:45.088814 2212 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:47:45.089501 kubelet[2212]: E0513 23:47:45.089106 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b03219fffb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:47:45.078116276 +0000 UTC m=+1.020809249,LastTimestamp:2025-05-13 23:47:45.078116276 +0000 UTC m=+1.020809249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:47:45.089501 kubelet[2212]: E0513 23:47:45.089394 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" May 13 23:47:45.089856 kubelet[2212]: W0513 23:47:45.089784 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.089856 kubelet[2212]: E0513 23:47:45.089819 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.089984 kubelet[2212]: I0513 23:47:45.089966 2212 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:45.090105 kubelet[2212]: I0513 23:47:45.090081 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:45.091274 kubelet[2212]: E0513 23:47:45.091254 2212 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:45.091573 kubelet[2212]: I0513 23:47:45.091552 2212 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:45.091724 kubelet[2212]: I0513 23:47:45.091697 2212 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:45.102811 kubelet[2212]: I0513 23:47:45.102770 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:45.104370 kubelet[2212]: I0513 23:47:45.104339 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:47:45.104763 kubelet[2212]: I0513 23:47:45.104388 2212 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:47:45.104763 kubelet[2212]: I0513 23:47:45.104411 2212 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:47:45.104763 kubelet[2212]: E0513 23:47:45.104576 2212 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:45.106039 kubelet[2212]: W0513 23:47:45.105107 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.106039 kubelet[2212]: E0513 23:47:45.105146 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.106139 kubelet[2212]: I0513 23:47:45.106053 2212 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:47:45.106139 kubelet[2212]: I0513 23:47:45.106064 2212 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:47:45.106139 kubelet[2212]: I0513 23:47:45.106085 2212 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:45.190484 kubelet[2212]: I0513 23:47:45.190452 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:45.191070 kubelet[2212]: E0513 23:47:45.191043 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" May 13 23:47:45.205235 kubelet[2212]: E0513 23:47:45.205196 2212 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:47:45.252241 kubelet[2212]: I0513 23:47:45.252223 2212 policy_none.go:49] "None policy: Start" May 13 23:47:45.252966 kubelet[2212]: I0513 23:47:45.252920 2212 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:47:45.253052 kubelet[2212]: I0513 23:47:45.253004 2212 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:45.260503 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:47:45.273727 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:47:45.277400 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:47:45.288027 kubelet[2212]: I0513 23:47:45.287982 2212 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:45.288374 kubelet[2212]: I0513 23:47:45.288217 2212 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:45.288746 kubelet[2212]: I0513 23:47:45.288723 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:45.289765 kubelet[2212]: E0513 23:47:45.289739 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" May 13 23:47:45.290127 kubelet[2212]: E0513 23:47:45.289956 2212 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:47:45.393089 kubelet[2212]: I0513 23:47:45.393064 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:45.393607 kubelet[2212]: E0513 23:47:45.393578 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" May 13 23:47:45.405774 kubelet[2212]: I0513 23:47:45.405678 2212 topology_manager.go:215] "Topology Admit Handler" podUID="86a4b1aca86b13db9327680ba2885c38" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 23:47:45.406882 kubelet[2212]: I0513 23:47:45.406825 2212 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:47:45.408364 kubelet[2212]: I0513 23:47:45.408165 2212 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:47:45.413076 systemd[1]: Created slice kubepods-burstable-pod86a4b1aca86b13db9327680ba2885c38.slice - libcontainer container kubepods-burstable-pod86a4b1aca86b13db9327680ba2885c38.slice. May 13 23:47:45.426219 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 23:47:45.441110 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 13 23:47:45.493610 kubelet[2212]: I0513 23:47:45.493578 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:45.493799 kubelet[2212]: I0513 23:47:45.493779 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:45.493904 kubelet[2212]: I0513 23:47:45.493888 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:45.494001 kubelet[2212]: I0513 23:47:45.493963 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:45.494132 kubelet[2212]: I0513 23:47:45.493986 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:45.494132 kubelet[2212]: I0513 23:47:45.494091 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:45.494132 kubelet[2212]: I0513 23:47:45.494110 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:45.494297 kubelet[2212]: I0513 23:47:45.494224 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:45.494297 kubelet[2212]: I0513 23:47:45.494256 2212 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 23:47:45.690638 kubelet[2212]: E0513 23:47:45.690510 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" May 13 23:47:45.726061 kubelet[2212]: E0513 23:47:45.725841 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:45.729696 containerd[1449]: time="2025-05-13T23:47:45.729656912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86a4b1aca86b13db9327680ba2885c38,Namespace:kube-system,Attempt:0,}" May 13 23:47:45.739783 kubelet[2212]: E0513 23:47:45.739756 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:45.740254 containerd[1449]: time="2025-05-13T23:47:45.740211846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 23:47:45.743723 kubelet[2212]: E0513 23:47:45.743699 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:45.744126 containerd[1449]: time="2025-05-13T23:47:45.744092188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 23:47:45.795763 kubelet[2212]: I0513 23:47:45.795551 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:45.795906 kubelet[2212]: E0513 23:47:45.795880 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" May 13 23:47:45.919820 kubelet[2212]: W0513 23:47:45.919743 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:45.919820 kubelet[2212]: E0513 23:47:45.919818 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.040061 kubelet[2212]: W0513 23:47:46.040011 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.040061 kubelet[2212]: E0513 23:47:46.040056 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.305035 kubelet[2212]: W0513 
23:47:46.304856 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.305035 kubelet[2212]: E0513 23:47:46.304918 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.385804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090692406.mount: Deactivated successfully. May 13 23:47:46.395166 containerd[1449]: time="2025-05-13T23:47:46.395001529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:46.398347 containerd[1449]: time="2025-05-13T23:47:46.398181708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 23:47:46.399124 containerd[1449]: time="2025-05-13T23:47:46.399084774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:46.401049 containerd[1449]: time="2025-05-13T23:47:46.400974214Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:46.402057 containerd[1449]: time="2025-05-13T23:47:46.402021468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:47:46.403142 containerd[1449]: time="2025-05-13T23:47:46.403074319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:46.403845 containerd[1449]: time="2025-05-13T23:47:46.403746732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:47:46.406323 containerd[1449]: time="2025-05-13T23:47:46.406288956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:47:46.408626 containerd[1449]: time="2025-05-13T23:47:46.408587136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 668.293629ms" May 13 23:47:46.409291 containerd[1449]: time="2025-05-13T23:47:46.409238282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.074306ms" May 13 23:47:46.410137 containerd[1449]: 
time="2025-05-13T23:47:46.409985528Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 679.874704ms" May 13 23:47:46.495714 kubelet[2212]: E0513 23:47:46.495663 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" May 13 23:47:46.558967 containerd[1449]: time="2025-05-13T23:47:46.558456503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:47:46.558967 containerd[1449]: time="2025-05-13T23:47:46.558581623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:47:46.558967 containerd[1449]: time="2025-05-13T23:47:46.558601371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.558967 containerd[1449]: time="2025-05-13T23:47:46.558688475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.559923 containerd[1449]: time="2025-05-13T23:47:46.559839744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:47:46.559969 containerd[1449]: time="2025-05-13T23:47:46.559761194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:47:46.560003 containerd[1449]: time="2025-05-13T23:47:46.559962906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:47:46.560096 containerd[1449]: time="2025-05-13T23:47:46.559897347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:47:46.560139 containerd[1449]: time="2025-05-13T23:47:46.560097620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.560232 containerd[1449]: time="2025-05-13T23:47:46.560197676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.560301 containerd[1449]: time="2025-05-13T23:47:46.559980335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.561460 containerd[1449]: time="2025-05-13T23:47:46.560704314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:47:46.574837 kubelet[2212]: W0513 23:47:46.574755 2212 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.574837 kubelet[2212]: E0513 23:47:46.574816 2212 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused May 13 23:47:46.581199 systemd[1]: Started cri-containerd-13ec0f0d37309be93c265ce4e88c6451b7dd9b3625fb35de90785c6b088bcbcc.scope - libcontainer container 13ec0f0d37309be93c265ce4e88c6451b7dd9b3625fb35de90785c6b088bcbcc. May 13 23:47:46.582322 systemd[1]: Started cri-containerd-3af21f516e70c41280de82012fcdb8c3c5e254152175eb18d35606020c2ba28b.scope - libcontainer container 3af21f516e70c41280de82012fcdb8c3c5e254152175eb18d35606020c2ba28b. May 13 23:47:46.583373 systemd[1]: Started cri-containerd-be2b4a73b4664ce9fc81ba7f67d5cac5987fae730851772ab34fa5e7c7cfb025.scope - libcontainer container be2b4a73b4664ce9fc81ba7f67d5cac5987fae730851772ab34fa5e7c7cfb025. May 13 23:47:46.598547 kubelet[2212]: I0513 23:47:46.598466 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:46.599336 kubelet[2212]: E0513 23:47:46.599281 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" May 13 23:47:46.619373 containerd[1449]: time="2025-05-13T23:47:46.619319628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86a4b1aca86b13db9327680ba2885c38,Namespace:kube-system,Attempt:0,} returns sandbox id \"3af21f516e70c41280de82012fcdb8c3c5e254152175eb18d35606020c2ba28b\"" May 13 23:47:46.619923 containerd[1449]: time="2025-05-13T23:47:46.619891904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"be2b4a73b4664ce9fc81ba7f67d5cac5987fae730851772ab34fa5e7c7cfb025\"" May 13 23:47:46.621083 kubelet[2212]: E0513 23:47:46.621056 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:46.622232 kubelet[2212]: E0513 23:47:46.621363 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:46.625626 containerd[1449]: time="2025-05-13T23:47:46.625593601Z" level=info msg="CreateContainer within sandbox \"be2b4a73b4664ce9fc81ba7f67d5cac5987fae730851772ab34fa5e7c7cfb025\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:47:46.625965 containerd[1449]: time="2025-05-13T23:47:46.625668474Z" level=info msg="CreateContainer within sandbox \"3af21f516e70c41280de82012fcdb8c3c5e254152175eb18d35606020c2ba28b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:47:46.629301 containerd[1449]: time="2025-05-13T23:47:46.629260511Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ec0f0d37309be93c265ce4e88c6451b7dd9b3625fb35de90785c6b088bcbcc\"" May 13 23:47:46.629824 kubelet[2212]: E0513 23:47:46.629796 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:46.632205 containerd[1449]: time="2025-05-13T23:47:46.632173700Z" level=info msg="CreateContainer within sandbox \"13ec0f0d37309be93c265ce4e88c6451b7dd9b3625fb35de90785c6b088bcbcc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:47:46.644962 containerd[1449]: time="2025-05-13T23:47:46.644917442Z" level=info msg="CreateContainer within sandbox \"be2b4a73b4664ce9fc81ba7f67d5cac5987fae730851772ab34fa5e7c7cfb025\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ad1936d3ba720e87c6702e635e522106eba4fabbade2eaa300f0b0995bd892fd\"" May 13 23:47:46.645677 containerd[1449]: time="2025-05-13T23:47:46.645651855Z" level=info msg="StartContainer for \"ad1936d3ba720e87c6702e635e522106eba4fabbade2eaa300f0b0995bd892fd\"" May 13 23:47:46.648350 containerd[1449]: time="2025-05-13T23:47:46.648223661Z" level=info msg="CreateContainer within sandbox \"3af21f516e70c41280de82012fcdb8c3c5e254152175eb18d35606020c2ba28b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54c1ffefa82102aa95aead330d30ff3e523e8b1368d3ff762768a7731360d522\"" May 13 23:47:46.648802 containerd[1449]: time="2025-05-13T23:47:46.648744530Z" level=info msg="StartContainer for \"54c1ffefa82102aa95aead330d30ff3e523e8b1368d3ff762768a7731360d522\"" May 13 23:47:46.654926 containerd[1449]: time="2025-05-13T23:47:46.654885588Z" level=info msg="CreateContainer within sandbox \"13ec0f0d37309be93c265ce4e88c6451b7dd9b3625fb35de90785c6b088bcbcc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3bd26bc4ced34e374f2d9b3b853ab29f2acdc42a6da69a86db4d94f51b485847\"" May 13 23:47:46.655666 containerd[1449]: time="2025-05-13T23:47:46.655639828Z" level=info msg="StartContainer for \"3bd26bc4ced34e374f2d9b3b853ab29f2acdc42a6da69a86db4d94f51b485847\"" May 13 23:47:46.672669 systemd[1]: Started cri-containerd-ad1936d3ba720e87c6702e635e522106eba4fabbade2eaa300f0b0995bd892fd.scope - libcontainer container ad1936d3ba720e87c6702e635e522106eba4fabbade2eaa300f0b0995bd892fd. May 13 23:47:46.674953 systemd[1]: Started cri-containerd-54c1ffefa82102aa95aead330d30ff3e523e8b1368d3ff762768a7731360d522.scope - libcontainer container 54c1ffefa82102aa95aead330d30ff3e523e8b1368d3ff762768a7731360d522. May 13 23:47:46.687129 systemd[1]: Started cri-containerd-3bd26bc4ced34e374f2d9b3b853ab29f2acdc42a6da69a86db4d94f51b485847.scope - libcontainer container 3bd26bc4ced34e374f2d9b3b853ab29f2acdc42a6da69a86db4d94f51b485847. 
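Note: the sandbox IDs (13ec0f..., 3af21f..., be2b4a...) and the container IDs created inside them (ad1936..., 54c1ff..., 3bd26b...) can be inspected directly on the node with crictl, since containerd is the CRI here. A minimal illustration, assuming crictl is installed and pointed at the containerd socket (IDs may be shortened to any unambiguous prefix):
    crictl ps --name kube-apiserver        # lists the kube-apiserver container once started
    crictl inspect 54c1ffefa82102aa        # full runtime status/spec for that container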
May 13 23:47:46.720493 containerd[1449]: time="2025-05-13T23:47:46.720356225Z" level=info msg="StartContainer for \"54c1ffefa82102aa95aead330d30ff3e523e8b1368d3ff762768a7731360d522\" returns successfully" May 13 23:47:46.720493 containerd[1449]: time="2025-05-13T23:47:46.720451884Z" level=info msg="StartContainer for \"ad1936d3ba720e87c6702e635e522106eba4fabbade2eaa300f0b0995bd892fd\" returns successfully" May 13 23:47:46.771116 containerd[1449]: time="2025-05-13T23:47:46.771069679Z" level=info msg="StartContainer for \"3bd26bc4ced34e374f2d9b3b853ab29f2acdc42a6da69a86db4d94f51b485847\" returns successfully" May 13 23:47:47.116274 kubelet[2212]: E0513 23:47:47.116202 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:47.116423 kubelet[2212]: E0513 23:47:47.116406 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:47.117811 kubelet[2212]: E0513 23:47:47.117791 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:48.119774 kubelet[2212]: E0513 23:47:48.119699 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:48.120484 kubelet[2212]: E0513 23:47:48.120417 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:48.201581 kubelet[2212]: I0513 23:47:48.201532 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:48.525880 kubelet[2212]: E0513 23:47:48.525804 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:47:48.584024 kubelet[2212]: I0513 23:47:48.582105 2212 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 23:47:49.075400 kubelet[2212]: I0513 23:47:49.075168 2212 apiserver.go:52] "Watching apiserver" May 13 23:47:49.089732 kubelet[2212]: I0513 23:47:49.089687 2212 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:47:50.557267 systemd[1]: Reload requested from client PID 2491 ('systemctl') (unit session-5.scope)... May 13 23:47:50.557281 systemd[1]: Reloading... May 13 23:47:50.678057 zram_generator::config[2537]: No configuration found. May 13 23:47:50.780864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:47:50.870060 systemd[1]: Reloading finished in 312 ms. May 13 23:47:50.893323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:47:50.904431 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:47:50.904687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:50.904748 systemd[1]: kubelet.service: Consumed 1.413s CPU time, 116.3M memory peak. May 13 23:47:50.914407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
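Note on the recurring "Nameserver limits exceeded" warnings: kubelet passes at most three nameservers from the node's resolv.conf into a pod's DNS configuration and drops the rest, logging this message each time. The node's actual resolver file is not shown in this log, but a hypothetical host configuration like the following would produce exactly the applied line "1.1.1.1 1.0.0.1 8.8.8.8" seen above:
    # /etc/resolv.conf on the node (illustrative only)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9   # fourth entry; kubelet keeps only the first three and emits the warning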
May 13 23:47:51.015176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:47:51.018779 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:47:51.061163 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:51.061163 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:47:51.061163 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:47:51.062068 kubelet[2577]: I0513 23:47:51.061366 2577 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:47:51.066563 kubelet[2577]: I0513 23:47:51.066536 2577 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:47:51.067035 kubelet[2577]: I0513 23:47:51.066647 2577 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:47:51.067035 kubelet[2577]: I0513 23:47:51.066829 2577 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:47:51.069118 kubelet[2577]: I0513 23:47:51.069093 2577 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:47:51.070654 kubelet[2577]: I0513 23:47:51.070614 2577 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:47:51.078595 kubelet[2577]: I0513 23:47:51.078559 2577 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:47:51.078807 kubelet[2577]: I0513 23:47:51.078766 2577 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:47:51.078977 kubelet[2577]: I0513 23:47:51.078800 2577 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:47:51.078977 kubelet[2577]: I0513 23:47:51.078972 2577 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:47:51.079122 kubelet[2577]: I0513 23:47:51.078981 2577 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:47:51.079122 kubelet[2577]: I0513 23:47:51.079032 2577 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:51.079171 kubelet[2577]: I0513 23:47:51.079140 2577 kubelet.go:400] "Attempting to sync node with API server" May 13 23:47:51.079171 kubelet[2577]: I0513 23:47:51.079155 2577 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:47:51.079216 kubelet[2577]: I0513 23:47:51.079188 2577 kubelet.go:312] "Adding apiserver pod source" May 13 23:47:51.079216 kubelet[2577]: I0513 23:47:51.079203 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:47:51.081811 kubelet[2577]: I0513 23:47:51.080042 2577 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:47:51.081811 kubelet[2577]: I0513 23:47:51.080211 2577 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:47:51.081811 kubelet[2577]: I0513 23:47:51.080587 2577 server.go:1264] "Started kubelet" May 13 23:47:51.082387 kubelet[2577]: I0513 23:47:51.082352 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:47:51.083939 kubelet[2577]: I0513 23:47:51.083913 2577 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:47:51.084135 kubelet[2577]: I0513 23:47:51.084121 2577 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" May 13 23:47:51.084372 kubelet[2577]: I0513 23:47:51.084348 2577 reconciler.go:26] "Reconciler: start to sync state" May 13 23:47:51.084741 kubelet[2577]: I0513 23:47:51.084698 2577 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:47:51.085608 kubelet[2577]: I0513 23:47:51.085579 2577 server.go:455] "Adding debug handlers to kubelet server" May 13 23:47:51.085741 kubelet[2577]: I0513 23:47:51.085715 2577 factory.go:221] Registration of the systemd container factory successfully May 13 23:47:51.085835 kubelet[2577]: I0513 23:47:51.085813 2577 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:47:51.085982 kubelet[2577]: I0513 23:47:51.085913 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:47:51.086279 kubelet[2577]: I0513 23:47:51.086265 2577 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:47:51.086557 kubelet[2577]: E0513 23:47:51.086527 2577 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:47:51.098980 kubelet[2577]: I0513 23:47:51.098917 2577 factory.go:221] Registration of the containerd container factory successfully May 13 23:47:51.103264 kubelet[2577]: I0513 23:47:51.103208 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:47:51.104567 kubelet[2577]: I0513 23:47:51.104541 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:47:51.104598 kubelet[2577]: I0513 23:47:51.104580 2577 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:47:51.104625 kubelet[2577]: I0513 23:47:51.104600 2577 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:47:51.104684 kubelet[2577]: E0513 23:47:51.104653 2577 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:47:51.140514 kubelet[2577]: I0513 23:47:51.140417 2577 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:47:51.140514 kubelet[2577]: I0513 23:47:51.140436 2577 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:47:51.140514 kubelet[2577]: I0513 23:47:51.140459 2577 state_mem.go:36] "Initialized new in-memory state store" May 13 23:47:51.141467 kubelet[2577]: I0513 23:47:51.141429 2577 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:47:51.141467 kubelet[2577]: I0513 23:47:51.141456 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:47:51.141540 kubelet[2577]: I0513 23:47:51.141476 2577 policy_none.go:49] "None policy: Start" May 13 23:47:51.142204 kubelet[2577]: I0513 23:47:51.142187 2577 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:47:51.142261 kubelet[2577]: I0513 23:47:51.142215 2577 state_mem.go:35] "Initializing new in-memory state store" May 13 23:47:51.142353 kubelet[2577]: I0513 23:47:51.142339 2577 state_mem.go:75] "Updated machine memory state" May 13 23:47:51.146499 kubelet[2577]: I0513 23:47:51.146477 2577 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:47:51.146842 
kubelet[2577]: I0513 23:47:51.146632 2577 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:47:51.146842 kubelet[2577]: I0513 23:47:51.146755 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:47:51.188124 kubelet[2577]: I0513 23:47:51.188093 2577 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:47:51.194074 kubelet[2577]: I0513 23:47:51.194032 2577 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 23:47:51.194170 kubelet[2577]: I0513 23:47:51.194126 2577 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 23:47:51.205008 kubelet[2577]: I0513 23:47:51.204966 2577 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:47:51.205140 kubelet[2577]: I0513 23:47:51.205103 2577 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:47:51.205659 kubelet[2577]: I0513 23:47:51.205139 2577 topology_manager.go:215] "Topology Admit Handler" podUID="86a4b1aca86b13db9327680ba2885c38" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 23:47:51.386241 kubelet[2577]: I0513 23:47:51.386196 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:51.386442 kubelet[2577]: I0513 23:47:51.386421 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:51.386513 kubelet[2577]: I0513 23:47:51.386499 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:51.386619 kubelet[2577]: I0513 23:47:51.386605 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:51.386687 kubelet[2577]: I0513 23:47:51.386674 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:51.386781 kubelet[2577]: I0513 23:47:51.386765 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:47:51.386917 kubelet[2577]: I0513 23:47:51.386834 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:47:51.386917 kubelet[2577]: I0513 23:47:51.386858 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:51.386917 kubelet[2577]: I0513 23:47:51.386879 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86a4b1aca86b13db9327680ba2885c38-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86a4b1aca86b13db9327680ba2885c38\") " pod="kube-system/kube-apiserver-localhost" May 13 23:47:51.540256 kubelet[2577]: E0513 23:47:51.540217 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.540395 kubelet[2577]: E0513 23:47:51.540280 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:51.541078 kubelet[2577]: E0513 23:47:51.541048 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.080304 kubelet[2577]: I0513 23:47:52.080274 2577 apiserver.go:52] "Watching apiserver" May 13 23:47:52.085208 kubelet[2577]: I0513 23:47:52.085054 2577 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:47:52.132020 kubelet[2577]: E0513 23:47:52.129565 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.150316 kubelet[2577]: E0513 23:47:52.150281 2577 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:47:52.151234 kubelet[2577]: E0513 23:47:52.150546 2577 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:47:52.152908 kubelet[2577]: E0513 23:47:52.151376 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:52.153136 kubelet[2577]: E0513 23:47:52.151404 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
23:47:52.153207 kubelet[2577]: I0513 23:47:52.152055 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.152024544 podStartE2EDuration="1.152024544s" podCreationTimestamp="2025-05-13 23:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:52.151109345 +0000 UTC m=+1.129442387" watchObservedRunningTime="2025-05-13 23:47:52.152024544 +0000 UTC m=+1.130357586" May 13 23:47:52.177319 kubelet[2577]: I0513 23:47:52.177249 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.17723241 podStartE2EDuration="1.17723241s" podCreationTimestamp="2025-05-13 23:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:52.166113336 +0000 UTC m=+1.144446338" watchObservedRunningTime="2025-05-13 23:47:52.17723241 +0000 UTC m=+1.155565452" May 13 23:47:52.177491 kubelet[2577]: I0513 23:47:52.177402 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.17739701 podStartE2EDuration="1.17739701s" podCreationTimestamp="2025-05-13 23:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:47:52.17707673 +0000 UTC m=+1.155409772" watchObservedRunningTime="2025-05-13 23:47:52.17739701 +0000 UTC m=+1.155730012" May 13 23:47:52.479894 sudo[1594]: pam_unix(sudo:session): session closed for user root May 13 23:47:52.481223 sshd[1593]: Connection closed by 10.0.0.1 port 33238 May 13 23:47:52.481646 sshd-session[1590]: pam_unix(sshd:session): session closed for user core May 13 23:47:52.485708 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:33238.service: Deactivated successfully. May 13 23:47:52.488021 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:47:52.488217 systemd[1]: session-5.scope: Consumed 6.170s CPU time, 260.8M memory peak. May 13 23:47:52.489439 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. May 13 23:47:52.490579 systemd-logind[1434]: Removed session 5. May 13 23:47:53.129894 kubelet[2577]: E0513 23:47:53.129854 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:53.131030 kubelet[2577]: E0513 23:47:53.130537 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:47:55.208891 kubelet[2577]: E0513 23:47:55.208777 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 23:48:05.109216 update_engine[1436]: I20250513 23:48:05.108716 1436 update_attempter.cc:509] Updating boot flags... 
May 13 23:48:05.140404 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2651) May 13 23:48:05.179137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2650) May 13 23:48:05.452494 kubelet[2577]: I0513 23:48:05.452149 2577 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:48:05.453504 kubelet[2577]: I0513 23:48:05.452707 2577 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:48:05.453546 containerd[1449]: time="2025-05-13T23:48:05.452520996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:48:06.095264 kubelet[2577]: I0513 23:48:06.095202 2577 topology_manager.go:215] "Topology Admit Handler" podUID="1adaa2ad-72c9-48ed-938e-a102a4ca2df1" podNamespace="kube-system" podName="kube-proxy-9jfrc" May 13 23:48:06.109385 systemd[1]: Created slice kubepods-besteffort-pod1adaa2ad_72c9_48ed_938e_a102a4ca2df1.slice - libcontainer container kubepods-besteffort-pod1adaa2ad_72c9_48ed_938e_a102a4ca2df1.slice. May 13 23:48:06.112572 kubelet[2577]: I0513 23:48:06.110488 2577 topology_manager.go:215] "Topology Admit Handler" podUID="855b3a51-e0d1-47e6-935f-8b312b9abe04" podNamespace="kube-flannel" podName="kube-flannel-ds-t89zd" May 13 23:48:06.127578 systemd[1]: Created slice kubepods-burstable-pod855b3a51_e0d1_47e6_935f_8b312b9abe04.slice - libcontainer container kubepods-burstable-pod855b3a51_e0d1_47e6_935f_8b312b9abe04.slice. May 13 23:48:06.181063 kubelet[2577]: I0513 23:48:06.181019 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-kube-proxy\") pod \"kube-proxy-9jfrc\" (UID: \"1adaa2ad-72c9-48ed-938e-a102a4ca2df1\") " pod="kube-system/kube-proxy-9jfrc" May 13 23:48:06.181063 kubelet[2577]: I0513 23:48:06.181064 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-xtables-lock\") pod \"kube-proxy-9jfrc\" (UID: \"1adaa2ad-72c9-48ed-938e-a102a4ca2df1\") " pod="kube-system/kube-proxy-9jfrc" May 13 23:48:06.181253 kubelet[2577]: I0513 23:48:06.181084 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-lib-modules\") pod \"kube-proxy-9jfrc\" (UID: \"1adaa2ad-72c9-48ed-938e-a102a4ca2df1\") " pod="kube-system/kube-proxy-9jfrc" May 13 23:48:06.181253 kubelet[2577]: I0513 23:48:06.181130 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbrj\" (UniqueName: \"kubernetes.io/projected/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-kube-api-access-lbbrj\") pod \"kube-proxy-9jfrc\" (UID: \"1adaa2ad-72c9-48ed-938e-a102a4ca2df1\") " pod="kube-system/kube-proxy-9jfrc" May 13 23:48:06.181253 kubelet[2577]: I0513 23:48:06.181209 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/855b3a51-e0d1-47e6-935f-8b312b9abe04-run\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.181253 kubelet[2577]: I0513 
23:48:06.181238 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/855b3a51-e0d1-47e6-935f-8b312b9abe04-cni-plugin\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.181779 kubelet[2577]: I0513 23:48:06.181257 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/855b3a51-e0d1-47e6-935f-8b312b9abe04-flannel-cfg\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.181779 kubelet[2577]: I0513 23:48:06.181287 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/855b3a51-e0d1-47e6-935f-8b312b9abe04-xtables-lock\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.181779 kubelet[2577]: I0513 23:48:06.181307 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/855b3a51-e0d1-47e6-935f-8b312b9abe04-cni\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.181779 kubelet[2577]: I0513 23:48:06.181325 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8wcw\" (UniqueName: \"kubernetes.io/projected/855b3a51-e0d1-47e6-935f-8b312b9abe04-kube-api-access-c8wcw\") pod \"kube-flannel-ds-t89zd\" (UID: \"855b3a51-e0d1-47e6-935f-8b312b9abe04\") " pod="kube-flannel/kube-flannel-ds-t89zd" May 13 23:48:06.294290 kubelet[2577]: E0513 23:48:06.293986 2577 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:48:06.294290 kubelet[2577]: E0513 23:48:06.294040 2577 projected.go:200] Error preparing data for projected volume kube-api-access-c8wcw for pod kube-flannel/kube-flannel-ds-t89zd: configmap "kube-root-ca.crt" not found May 13 23:48:06.294290 kubelet[2577]: E0513 23:48:06.294105 2577 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/855b3a51-e0d1-47e6-935f-8b312b9abe04-kube-api-access-c8wcw podName:855b3a51-e0d1-47e6-935f-8b312b9abe04 nodeName:}" failed. No retries permitted until 2025-05-13 23:48:06.794084763 +0000 UTC m=+15.772417805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c8wcw" (UniqueName: "kubernetes.io/projected/855b3a51-e0d1-47e6-935f-8b312b9abe04-kube-api-access-c8wcw") pod "kube-flannel-ds-t89zd" (UID: "855b3a51-e0d1-47e6-935f-8b312b9abe04") : configmap "kube-root-ca.crt" not found May 13 23:48:06.295570 kubelet[2577]: E0513 23:48:06.295543 2577 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:48:06.295570 kubelet[2577]: E0513 23:48:06.295572 2577 projected.go:200] Error preparing data for projected volume kube-api-access-lbbrj for pod kube-system/kube-proxy-9jfrc: configmap "kube-root-ca.crt" not found May 13 23:48:06.295680 kubelet[2577]: E0513 23:48:06.295618 2577 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-kube-api-access-lbbrj podName:1adaa2ad-72c9-48ed-938e-a102a4ca2df1 nodeName:}" failed. No retries permitted until 2025-05-13 23:48:06.795604002 +0000 UTC m=+15.773937044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lbbrj" (UniqueName: "kubernetes.io/projected/1adaa2ad-72c9-48ed-938e-a102a4ca2df1-kube-api-access-lbbrj") pod "kube-proxy-9jfrc" (UID: "1adaa2ad-72c9-48ed-938e-a102a4ca2df1") : configmap "kube-root-ca.crt" not found May 13 23:48:07.021892 containerd[1449]: time="2025-05-13T23:48:07.021842848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jfrc,Uid:1adaa2ad-72c9-48ed-938e-a102a4ca2df1,Namespace:kube-system,Attempt:0,}" May 13 23:48:07.032738 containerd[1449]: time="2025-05-13T23:48:07.032697685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-t89zd,Uid:855b3a51-e0d1-47e6-935f-8b312b9abe04,Namespace:kube-flannel,Attempt:0,}" May 13 23:48:07.049614 containerd[1449]: time="2025-05-13T23:48:07.048511841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:48:07.049614 containerd[1449]: time="2025-05-13T23:48:07.049044001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:48:07.049614 containerd[1449]: time="2025-05-13T23:48:07.049058161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:07.049614 containerd[1449]: time="2025-05-13T23:48:07.049162601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:07.078269 systemd[1]: Started cri-containerd-65b4cccc4a9ed1268171b92a23579d4029830eb2a97b09c3420a467f7b6af3fb.scope - libcontainer container 65b4cccc4a9ed1268171b92a23579d4029830eb2a97b09c3420a467f7b6af3fb. May 13 23:48:07.080222 containerd[1449]: time="2025-05-13T23:48:07.079936193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:48:07.080333 containerd[1449]: time="2025-05-13T23:48:07.080027393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:48:07.080333 containerd[1449]: time="2025-05-13T23:48:07.080054633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:07.080333 containerd[1449]: time="2025-05-13T23:48:07.080152913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:07.104206 systemd[1]: Started cri-containerd-bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e.scope - libcontainer container bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e. May 13 23:48:07.105129 containerd[1449]: time="2025-05-13T23:48:07.105095306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jfrc,Uid:1adaa2ad-72c9-48ed-938e-a102a4ca2df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"65b4cccc4a9ed1268171b92a23579d4029830eb2a97b09c3420a467f7b6af3fb\"" May 13 23:48:07.109298 containerd[1449]: time="2025-05-13T23:48:07.109203705Z" level=info msg="CreateContainer within sandbox \"65b4cccc4a9ed1268171b92a23579d4029830eb2a97b09c3420a467f7b6af3fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:48:07.127130 containerd[1449]: time="2025-05-13T23:48:07.127082141Z" level=info msg="CreateContainer within sandbox \"65b4cccc4a9ed1268171b92a23579d4029830eb2a97b09c3420a467f7b6af3fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22cbf280b352bb5e8c6c7a4047fd7cc97947fe1d85775ae1694815439d0a190a\"" May 13 23:48:07.127783 containerd[1449]: time="2025-05-13T23:48:07.127741221Z" level=info msg="StartContainer for \"22cbf280b352bb5e8c6c7a4047fd7cc97947fe1d85775ae1694815439d0a190a\"" May 13 23:48:07.143642 containerd[1449]: time="2025-05-13T23:48:07.143587697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-t89zd,Uid:855b3a51-e0d1-47e6-935f-8b312b9abe04,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\"" May 13 23:48:07.145882 containerd[1449]: time="2025-05-13T23:48:07.145838976Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 23:48:07.162225 systemd[1]: Started cri-containerd-22cbf280b352bb5e8c6c7a4047fd7cc97947fe1d85775ae1694815439d0a190a.scope - libcontainer container 22cbf280b352bb5e8c6c7a4047fd7cc97947fe1d85775ae1694815439d0a190a. May 13 23:48:07.199491 containerd[1449]: time="2025-05-13T23:48:07.196357883Z" level=info msg="StartContainer for \"22cbf280b352bb5e8c6c7a4047fd7cc97947fe1d85775ae1694815439d0a190a\" returns successfully" May 13 23:48:08.436064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836932930.mount: Deactivated successfully. 
May 13 23:48:08.476009 containerd[1449]: time="2025-05-13T23:48:08.475221642Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" May 13 23:48:08.476400 containerd[1449]: time="2025-05-13T23:48:08.476357441Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:08.478100 containerd[1449]: time="2025-05-13T23:48:08.478064521Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:08.479235 containerd[1449]: time="2025-05-13T23:48:08.479199401Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.333306145s" May 13 23:48:08.479235 containerd[1449]: time="2025-05-13T23:48:08.479234961Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" May 13 23:48:08.479887 containerd[1449]: time="2025-05-13T23:48:08.479750241Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:08.482132 containerd[1449]: time="2025-05-13T23:48:08.482088080Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 23:48:08.493905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919218678.mount: Deactivated successfully. May 13 23:48:08.496993 containerd[1449]: time="2025-05-13T23:48:08.496947156Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9\"" May 13 23:48:08.498067 containerd[1449]: time="2025-05-13T23:48:08.497533756Z" level=info msg="StartContainer for \"a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9\"" May 13 23:48:08.525192 systemd[1]: Started cri-containerd-a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9.scope - libcontainer container a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9. May 13 23:48:08.549682 containerd[1449]: time="2025-05-13T23:48:08.549570624Z" level=info msg="StartContainer for \"a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9\" returns successfully" May 13 23:48:08.558207 systemd[1]: cri-containerd-a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9.scope: Deactivated successfully. 
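Note: the a3eb415e... container is flannel's install-cni-plugin step. In the stock flannel manifest it only copies the flannel CNI binary from the flannel-cni-plugin image into the host directory mounted at the "cni-plugin" hostPath volume listed earlier (conventionally /opt/cni/bin) and then exits, so the immediate scope deactivation and "shim disconnected" messages that follow are expected rather than an error. A quick check on the node, assuming that conventional path (it is not confirmed by this log), would be:
    ls -l /opt/cni/bin/flannel    # present once install-cni-plugin has run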
May 13 23:48:08.619521 containerd[1449]: time="2025-05-13T23:48:08.619275967Z" level=info msg="shim disconnected" id=a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9 namespace=k8s.io May 13 23:48:08.619521 containerd[1449]: time="2025-05-13T23:48:08.619340327Z" level=warning msg="cleaning up after shim disconnected" id=a3eb415ea90120d152fc2d667ec6c91e1aa6b61afdf99e6b4739ea59ee626fa9 namespace=k8s.io May 13 23:48:08.619521 containerd[1449]: time="2025-05-13T23:48:08.619348847Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:48:09.168928 containerd[1449]: time="2025-05-13T23:48:09.168855755Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 23:48:09.182079 kubelet[2577]: I0513 23:48:09.181887 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9jfrc" podStartSLOduration=3.181868111 podStartE2EDuration="3.181868111s" podCreationTimestamp="2025-05-13 23:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:08.183096993 +0000 UTC m=+17.161430075" watchObservedRunningTime="2025-05-13 23:48:09.181868111 +0000 UTC m=+18.160201153" May 13 23:48:10.245559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546815841.mount: Deactivated successfully. May 13 23:48:10.857027 containerd[1449]: time="2025-05-13T23:48:10.855895531Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:10.858356 containerd[1449]: time="2025-05-13T23:48:10.858308011Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" May 13 23:48:10.862967 containerd[1449]: time="2025-05-13T23:48:10.862929970Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:10.865974 containerd[1449]: time="2025-05-13T23:48:10.865940849Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:10.873478 containerd[1449]: time="2025-05-13T23:48:10.873444447Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.704508613s" May 13 23:48:10.873661 containerd[1449]: time="2025-05-13T23:48:10.873638287Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" May 13 23:48:10.902707 containerd[1449]: time="2025-05-13T23:48:10.902663041Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:48:10.941214 containerd[1449]: time="2025-05-13T23:48:10.941155272Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4\"" May 13 23:48:10.941971 containerd[1449]: time="2025-05-13T23:48:10.941940232Z" level=info msg="StartContainer for \"b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4\"" May 13 23:48:10.967167 systemd[1]: Started cri-containerd-b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4.scope - libcontainer container b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4. May 13 23:48:10.992543 containerd[1449]: time="2025-05-13T23:48:10.992498581Z" level=info msg="StartContainer for \"b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4\" returns successfully" May 13 23:48:11.006962 systemd[1]: cri-containerd-b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4.scope: Deactivated successfully. May 13 23:48:11.033938 containerd[1449]: time="2025-05-13T23:48:11.033873452Z" level=info msg="shim disconnected" id=b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4 namespace=k8s.io May 13 23:48:11.034427 containerd[1449]: time="2025-05-13T23:48:11.034269332Z" level=warning msg="cleaning up after shim disconnected" id=b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4 namespace=k8s.io May 13 23:48:11.034427 containerd[1449]: time="2025-05-13T23:48:11.034288692Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:48:11.098456 kubelet[2577]: I0513 23:48:11.098216 2577 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 23:48:11.122664 kubelet[2577]: I0513 23:48:11.122546 2577 topology_manager.go:215] "Topology Admit Handler" podUID="281c65e2-64e6-4c37-bd14-46057801235c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-47rc7" May 13 23:48:11.122772 kubelet[2577]: I0513 23:48:11.122685 2577 topology_manager.go:215] "Topology Admit Handler" podUID="4d0a926a-1c49-4191-ab71-1ae7fc248341" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x6rx8" May 13 23:48:11.135978 systemd[1]: Created slice kubepods-burstable-pod4d0a926a_1c49_4191_ab71_1ae7fc248341.slice - libcontainer container kubepods-burstable-pod4d0a926a_1c49_4191_ab71_1ae7fc248341.slice. May 13 23:48:11.141366 systemd[1]: Created slice kubepods-burstable-pod281c65e2_64e6_4c37_bd14_46057801235c.slice - libcontainer container kubepods-burstable-pod281c65e2_64e6_4c37_bd14_46057801235c.slice. May 13 23:48:11.153254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b57caf5ab85656c9a0c0c6b5e4685d0c9f32625ca7ef9477e125b85ed2b5d8a4-rootfs.mount: Deactivated successfully. 
May 13 23:48:11.202980 containerd[1449]: time="2025-05-13T23:48:11.202878136Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 23:48:11.216487 kubelet[2577]: I0513 23:48:11.215976 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppvr\" (UniqueName: \"kubernetes.io/projected/281c65e2-64e6-4c37-bd14-46057801235c-kube-api-access-xppvr\") pod \"coredns-7db6d8ff4d-47rc7\" (UID: \"281c65e2-64e6-4c37-bd14-46057801235c\") " pod="kube-system/coredns-7db6d8ff4d-47rc7" May 13 23:48:11.217552 kubelet[2577]: I0513 23:48:11.216721 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281c65e2-64e6-4c37-bd14-46057801235c-config-volume\") pod \"coredns-7db6d8ff4d-47rc7\" (UID: \"281c65e2-64e6-4c37-bd14-46057801235c\") " pod="kube-system/coredns-7db6d8ff4d-47rc7" May 13 23:48:11.217552 kubelet[2577]: I0513 23:48:11.216789 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0a926a-1c49-4191-ab71-1ae7fc248341-config-volume\") pod \"coredns-7db6d8ff4d-x6rx8\" (UID: \"4d0a926a-1c49-4191-ab71-1ae7fc248341\") " pod="kube-system/coredns-7db6d8ff4d-x6rx8" May 13 23:48:11.217552 kubelet[2577]: I0513 23:48:11.216807 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rjk\" (UniqueName: \"kubernetes.io/projected/4d0a926a-1c49-4191-ab71-1ae7fc248341-kube-api-access-g4rjk\") pod \"coredns-7db6d8ff4d-x6rx8\" (UID: \"4d0a926a-1c49-4191-ab71-1ae7fc248341\") " pod="kube-system/coredns-7db6d8ff4d-x6rx8" May 13 23:48:11.230424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850611447.mount: Deactivated successfully. May 13 23:48:11.233174 containerd[1449]: time="2025-05-13T23:48:11.233122330Z" level=info msg="CreateContainer within sandbox \"bec8f34e9bb67f1f411af9474300daf25547d312d2964ed9e7a7de765c35a00e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4379d745c9b163a94110fcfdbcb5e3d383a92ca0028b470d522190ab18440279\"" May 13 23:48:11.237564 containerd[1449]: time="2025-05-13T23:48:11.237291729Z" level=info msg="StartContainer for \"4379d745c9b163a94110fcfdbcb5e3d383a92ca0028b470d522190ab18440279\"" May 13 23:48:11.262225 systemd[1]: Started cri-containerd-4379d745c9b163a94110fcfdbcb5e3d383a92ca0028b470d522190ab18440279.scope - libcontainer container 4379d745c9b163a94110fcfdbcb5e3d383a92ca0028b470d522190ab18440279. 
May 13 23:48:11.298704 containerd[1449]: time="2025-05-13T23:48:11.298651236Z" level=info msg="StartContainer for \"4379d745c9b163a94110fcfdbcb5e3d383a92ca0028b470d522190ab18440279\" returns successfully" May 13 23:48:11.444543 containerd[1449]: time="2025-05-13T23:48:11.444078965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x6rx8,Uid:4d0a926a-1c49-4191-ab71-1ae7fc248341,Namespace:kube-system,Attempt:0,}" May 13 23:48:11.445767 containerd[1449]: time="2025-05-13T23:48:11.445736725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47rc7,Uid:281c65e2-64e6-4c37-bd14-46057801235c,Namespace:kube-system,Attempt:0,}" May 13 23:48:11.589417 containerd[1449]: time="2025-05-13T23:48:11.589295494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x6rx8,Uid:4d0a926a-1c49-4191-ab71-1ae7fc248341,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"139c8d213469a7bc8336f5a7ccee4b648735027268bc72fbdb2fdf2dfce0c2b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:48:11.590352 kubelet[2577]: E0513 23:48:11.589814 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"139c8d213469a7bc8336f5a7ccee4b648735027268bc72fbdb2fdf2dfce0c2b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:48:11.590352 kubelet[2577]: E0513 23:48:11.589918 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"139c8d213469a7bc8336f5a7ccee4b648735027268bc72fbdb2fdf2dfce0c2b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-x6rx8" May 13 23:48:11.590352 kubelet[2577]: E0513 23:48:11.589939 2577 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"139c8d213469a7bc8336f5a7ccee4b648735027268bc72fbdb2fdf2dfce0c2b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-x6rx8" May 13 23:48:11.590352 kubelet[2577]: E0513 23:48:11.589984 2577 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x6rx8_kube-system(4d0a926a-1c49-4191-ab71-1ae7fc248341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x6rx8_kube-system(4d0a926a-1c49-4191-ab71-1ae7fc248341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"139c8d213469a7bc8336f5a7ccee4b648735027268bc72fbdb2fdf2dfce0c2b6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-x6rx8" podUID="4d0a926a-1c49-4191-ab71-1ae7fc248341" May 13 23:48:11.590556 containerd[1449]: time="2025-05-13T23:48:11.590520974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47rc7,Uid:281c65e2-64e6-4c37-bd14-46057801235c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a6071a468589c476e55f8c713b7bb888f8fc16919dfb36f792854872ef04229f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:48:11.591818 kubelet[2577]: E0513 23:48:11.591316 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6071a468589c476e55f8c713b7bb888f8fc16919dfb36f792854872ef04229f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:48:11.591818 kubelet[2577]: E0513 23:48:11.591547 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6071a468589c476e55f8c713b7bb888f8fc16919dfb36f792854872ef04229f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-47rc7" May 13 23:48:11.591818 kubelet[2577]: E0513 23:48:11.591568 2577 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6071a468589c476e55f8c713b7bb888f8fc16919dfb36f792854872ef04229f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-47rc7" May 13 23:48:11.591818 kubelet[2577]: E0513 23:48:11.591695 2577 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-47rc7_kube-system(281c65e2-64e6-4c37-bd14-46057801235c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-47rc7_kube-system(281c65e2-64e6-4c37-bd14-46057801235c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6071a468589c476e55f8c713b7bb888f8fc16919dfb36f792854872ef04229f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-47rc7" podUID="281c65e2-64e6-4c37-bd14-46057801235c" May 13 23:48:12.399439 systemd-networkd[1367]: flannel.1: Link UP May 13 23:48:12.399448 systemd-networkd[1367]: flannel.1: Gained carrier May 13 23:48:13.503264 systemd-networkd[1367]: flannel.1: Gained IPv6LL May 13 23:48:16.827307 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). May 13 23:48:16.880872 sshd[3232]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:16.884290 sshd-session[3232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:16.888756 systemd-logind[1434]: New session 6 of user core. May 13 23:48:16.896177 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:48:17.020963 sshd[3234]: Connection closed by 10.0.0.1 port 53312 May 13 23:48:17.021377 sshd-session[3232]: pam_unix(sshd:session): session closed for user core May 13 23:48:17.024406 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:53312.service: Deactivated successfully. May 13 23:48:17.026342 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:48:17.027974 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. May 13 23:48:17.029389 systemd-logind[1434]: Removed session 6. 
May 13 23:48:22.032848 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:53328.service - OpenSSH per-connection server daemon (10.0.0.1:53328). May 13 23:48:22.086797 sshd[3276]: Accepted publickey for core from 10.0.0.1 port 53328 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:22.088114 sshd-session[3276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:22.092759 systemd-logind[1434]: New session 7 of user core. May 13 23:48:22.103199 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:48:22.105658 containerd[1449]: time="2025-05-13T23:48:22.105623499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47rc7,Uid:281c65e2-64e6-4c37-bd14-46057801235c,Namespace:kube-system,Attempt:0,}" May 13 23:48:22.145460 systemd-networkd[1367]: cni0: Link UP May 13 23:48:22.145467 systemd-networkd[1367]: cni0: Gained carrier May 13 23:48:22.145679 systemd-networkd[1367]: cni0: Lost carrier May 13 23:48:22.153293 kernel: cni0: port 1(veth93684719) entered blocking state May 13 23:48:22.153387 kernel: cni0: port 1(veth93684719) entered disabled state May 13 23:48:22.153402 kernel: veth93684719: entered allmulticast mode May 13 23:48:22.153418 kernel: veth93684719: entered promiscuous mode May 13 23:48:22.153434 kernel: cni0: port 1(veth93684719) entered blocking state May 13 23:48:22.154079 kernel: cni0: port 1(veth93684719) entered forwarding state May 13 23:48:22.158005 kernel: cni0: port 1(veth93684719) entered disabled state May 13 23:48:22.158204 systemd-networkd[1367]: veth93684719: Link UP May 13 23:48:22.173046 kernel: cni0: port 1(veth93684719) entered blocking state May 13 23:48:22.173144 kernel: cni0: port 1(veth93684719) entered forwarding state May 13 23:48:22.172761 systemd-networkd[1367]: veth93684719: Gained carrier May 13 23:48:22.173049 systemd-networkd[1367]: cni0: Gained carrier May 13 23:48:22.176084 containerd[1449]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} May 13 23:48:22.176084 containerd[1449]: delegateAdd: netconf sent to delegate plugin: May 13 23:48:22.203752 containerd[1449]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:48:22.203176726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:48:22.203752 containerd[1449]: time="2025-05-13T23:48:22.203234246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:48:22.203752 containerd[1449]: time="2025-05-13T23:48:22.203244766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:22.203752 containerd[1449]: time="2025-05-13T23:48:22.203318926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:22.230153 systemd[1]: Started cri-containerd-5a22e8c8057b28fb54baa0459636dccc0fe549d99e829da6f8ddf96406673fa9.scope - libcontainer container 5a22e8c8057b28fb54baa0459636dccc0fe549d99e829da6f8ddf96406673fa9. May 13 23:48:22.244893 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:22.259220 sshd[3278]: Connection closed by 10.0.0.1 port 53328 May 13 23:48:22.259904 sshd-session[3276]: pam_unix(sshd:session): session closed for user core May 13 23:48:22.264741 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:53328.service: Deactivated successfully. May 13 23:48:22.266604 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:48:22.270358 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. May 13 23:48:22.272197 systemd-logind[1434]: Removed session 7. May 13 23:48:22.273576 containerd[1449]: time="2025-05-13T23:48:22.273530516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47rc7,Uid:281c65e2-64e6-4c37-bd14-46057801235c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a22e8c8057b28fb54baa0459636dccc0fe549d99e829da6f8ddf96406673fa9\"" May 13 23:48:22.278358 containerd[1449]: time="2025-05-13T23:48:22.278300356Z" level=info msg="CreateContainer within sandbox \"5a22e8c8057b28fb54baa0459636dccc0fe549d99e829da6f8ddf96406673fa9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:48:22.293452 containerd[1449]: time="2025-05-13T23:48:22.293328594Z" level=info msg="CreateContainer within sandbox \"5a22e8c8057b28fb54baa0459636dccc0fe549d99e829da6f8ddf96406673fa9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f74ea6b64a56ba232a39e61a230002587bc96084dbd33e02298a4a475dd40fc0\"" May 13 23:48:22.294069 containerd[1449]: time="2025-05-13T23:48:22.294041634Z" level=info msg="StartContainer for \"f74ea6b64a56ba232a39e61a230002587bc96084dbd33e02298a4a475dd40fc0\"" May 13 23:48:22.329238 systemd[1]: Started cri-containerd-f74ea6b64a56ba232a39e61a230002587bc96084dbd33e02298a4a475dd40fc0.scope - libcontainer container f74ea6b64a56ba232a39e61a230002587bc96084dbd33e02298a4a475dd40fc0. 
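The netconf that flannel hands to its delegate bridge plugin is logged verbatim above: bridge cbr0 with MTU 1450, host-local IPAM over 192.168.0.0/24, and a route to the wider 192.168.0.0/17 range. The following minimal Go sketch decodes that exact JSON purely to make its structure explicit; the struct shapes are ad hoc stand-ins for illustration, not the CNI library's own types.

package main

// Ad-hoc decoding of the delegate netconf printed in the containerd log above.

import (
	"encoding/json"
	"fmt"
)

type ipamRange struct {
	Subnet string `json:"subnet"`
}

type netConf struct {
	CNIVersion       string `json:"cniVersion"`
	Name             string `json:"name"`
	Type             string `json:"type"`
	MTU              int    `json:"mtu"`
	IsGateway        bool   `json:"isGateway"`
	IsDefaultGateway bool   `json:"isDefaultGateway"`
	HairpinMode      bool   `json:"hairpinMode"`
	IPMasq           bool   `json:"ipMasq"`
	IPAM             struct {
		Type   string              `json:"type"`
		Ranges [][]ipamRange       `json:"ranges"`
		Routes []map[string]string `json:"routes"`
	} `json:"ipam"`
}

func main() {
	// Copied verbatim from the delegateAdd log entry above.
	raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

	var conf netConf
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		panic(err)
	}
	fmt.Printf("bridge %s, mtu %d, pod subnet %s, cluster route %s\n",
		conf.Name, conf.MTU, conf.IPAM.Ranges[0][0].Subnet, conf.IPAM.Routes[0]["dst"])
}

The same configuration is sent again when the second coredns sandbox (vethd1abff88) is created below; only the veth endpoint differs.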
May 13 23:48:22.357900 containerd[1449]: time="2025-05-13T23:48:22.355371705Z" level=info msg="StartContainer for \"f74ea6b64a56ba232a39e61a230002587bc96084dbd33e02298a4a475dd40fc0\" returns successfully" May 13 23:48:23.105775 containerd[1449]: time="2025-05-13T23:48:23.105737003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x6rx8,Uid:4d0a926a-1c49-4191-ab71-1ae7fc248341,Namespace:kube-system,Attempt:0,}" May 13 23:48:23.150916 systemd-networkd[1367]: vethd1abff88: Link UP May 13 23:48:23.153487 kernel: cni0: port 2(vethd1abff88) entered blocking state May 13 23:48:23.153556 kernel: cni0: port 2(vethd1abff88) entered disabled state May 13 23:48:23.153576 kernel: vethd1abff88: entered allmulticast mode May 13 23:48:23.154129 kernel: vethd1abff88: entered promiscuous mode May 13 23:48:23.159137 kernel: cni0: port 2(vethd1abff88) entered blocking state May 13 23:48:23.159188 kernel: cni0: port 2(vethd1abff88) entered forwarding state May 13 23:48:23.158948 systemd-networkd[1367]: vethd1abff88: Gained carrier May 13 23:48:23.160412 containerd[1449]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} May 13 23:48:23.160412 containerd[1449]: delegateAdd: netconf sent to delegate plugin: May 13 23:48:23.179877 containerd[1449]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:48:23.179318674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:48:23.179877 containerd[1449]: time="2025-05-13T23:48:23.179718874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:48:23.179877 containerd[1449]: time="2025-05-13T23:48:23.179730994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:23.179877 containerd[1449]: time="2025-05-13T23:48:23.179804674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:48:23.199151 systemd[1]: Started cri-containerd-349a3476aaab36432ec6dee77cddb16a314e84cd5e064f25e6a9b20cb7d60b38.scope - libcontainer container 349a3476aaab36432ec6dee77cddb16a314e84cd5e064f25e6a9b20cb7d60b38. 
May 13 23:48:23.228285 kubelet[2577]: I0513 23:48:23.225868 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-t89zd" podStartSLOduration=13.479007561 podStartE2EDuration="17.225849348s" podCreationTimestamp="2025-05-13 23:48:06 +0000 UTC" firstStartedPulling="2025-05-13 23:48:07.145374936 +0000 UTC m=+16.123707978" lastFinishedPulling="2025-05-13 23:48:10.892216763 +0000 UTC m=+19.870549765" observedRunningTime="2025-05-13 23:48:12.200230487 +0000 UTC m=+21.178563529" watchObservedRunningTime="2025-05-13 23:48:23.225849348 +0000 UTC m=+32.204182390" May 13 23:48:23.228249 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:48:23.254421 containerd[1449]: time="2025-05-13T23:48:23.254371464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x6rx8,Uid:4d0a926a-1c49-4191-ab71-1ae7fc248341,Namespace:kube-system,Attempt:0,} returns sandbox id \"349a3476aaab36432ec6dee77cddb16a314e84cd5e064f25e6a9b20cb7d60b38\"" May 13 23:48:23.255074 kubelet[2577]: I0513 23:48:23.254239 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-47rc7" podStartSLOduration=17.254219384 podStartE2EDuration="17.254219384s" podCreationTimestamp="2025-05-13 23:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:23.228690587 +0000 UTC m=+32.207023629" watchObservedRunningTime="2025-05-13 23:48:23.254219384 +0000 UTC m=+32.232552386" May 13 23:48:23.260403 containerd[1449]: time="2025-05-13T23:48:23.260363783Z" level=info msg="CreateContainer within sandbox \"349a3476aaab36432ec6dee77cddb16a314e84cd5e064f25e6a9b20cb7d60b38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:48:23.296512 containerd[1449]: time="2025-05-13T23:48:23.296460178Z" level=info msg="CreateContainer within sandbox \"349a3476aaab36432ec6dee77cddb16a314e84cd5e064f25e6a9b20cb7d60b38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ddb91ae942960e1a1197ac1244f6f191efd83189a638285ee79baac97559241\"" May 13 23:48:23.298093 containerd[1449]: time="2025-05-13T23:48:23.297196458Z" level=info msg="StartContainer for \"3ddb91ae942960e1a1197ac1244f6f191efd83189a638285ee79baac97559241\"" May 13 23:48:23.329210 systemd[1]: Started cri-containerd-3ddb91ae942960e1a1197ac1244f6f191efd83189a638285ee79baac97559241.scope - libcontainer container 3ddb91ae942960e1a1197ac1244f6f191efd83189a638285ee79baac97559241. 
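The startup-latency figures logged above are internally consistent: for kube-flannel-ds-t89zd, image pulling ran from m=+16.123707978 to m=+19.870549765 (3.746841787 s), and subtracting that from podStartE2EDuration=17.225849348 s gives exactly the reported podStartSLOduration=13.479007561 s. The coredns pods pulled no image (their pull timestamps are zero-valued), so their SLO and E2E durations coincide, at 17.254219384 s here and 18.260965452 s for coredns-7db6d8ff4d-x6rx8 below.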
May 13 23:48:23.356014 containerd[1449]: time="2025-05-13T23:48:23.355887170Z" level=info msg="StartContainer for \"3ddb91ae942960e1a1197ac1244f6f191efd83189a638285ee79baac97559241\" returns successfully" May 13 23:48:23.615180 systemd-networkd[1367]: veth93684719: Gained IPv6LL May 13 23:48:23.743228 systemd-networkd[1367]: cni0: Gained IPv6LL May 13 23:48:24.255211 systemd-networkd[1367]: vethd1abff88: Gained IPv6LL May 13 23:48:24.261119 kubelet[2577]: I0513 23:48:24.260985 2577 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x6rx8" podStartSLOduration=18.260965452 podStartE2EDuration="18.260965452s" podCreationTimestamp="2025-05-13 23:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:24.239137095 +0000 UTC m=+33.217470137" watchObservedRunningTime="2025-05-13 23:48:24.260965452 +0000 UTC m=+33.239298494" May 13 23:48:27.292635 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:48042.service - OpenSSH per-connection server daemon (10.0.0.1:48042). May 13 23:48:27.352471 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 48042 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:27.354603 sshd-session[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:27.362298 systemd-logind[1434]: New session 8 of user core. May 13 23:48:27.377219 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:48:27.515874 sshd[3553]: Connection closed by 10.0.0.1 port 48042 May 13 23:48:27.516928 sshd-session[3551]: pam_unix(sshd:session): session closed for user core May 13 23:48:27.529255 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:48042.service: Deactivated successfully. May 13 23:48:27.532246 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:48:27.533889 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. May 13 23:48:27.540553 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:48054.service - OpenSSH per-connection server daemon (10.0.0.1:48054). May 13 23:48:27.542085 systemd-logind[1434]: Removed session 8. May 13 23:48:27.581087 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 48054 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:27.582845 sshd-session[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:27.590845 systemd-logind[1434]: New session 9 of user core. May 13 23:48:27.603184 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:48:27.755036 sshd[3590]: Connection closed by 10.0.0.1 port 48054 May 13 23:48:27.754952 sshd-session[3582]: pam_unix(sshd:session): session closed for user core May 13 23:48:27.771766 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:48054.service: Deactivated successfully. May 13 23:48:27.773792 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:48:27.775571 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. May 13 23:48:27.785101 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062). May 13 23:48:27.790276 systemd-logind[1434]: Removed session 9. 
May 13 23:48:27.831230 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:27.832983 sshd-session[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:27.838650 systemd-logind[1434]: New session 10 of user core. May 13 23:48:27.846527 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:48:27.964944 sshd[3604]: Connection closed by 10.0.0.1 port 48062 May 13 23:48:27.965320 sshd-session[3601]: pam_unix(sshd:session): session closed for user core May 13 23:48:27.969216 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:48062.service: Deactivated successfully. May 13 23:48:27.971171 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:48:27.971966 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. May 13 23:48:27.972906 systemd-logind[1434]: Removed session 10. May 13 23:48:32.996433 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640). May 13 23:48:33.040076 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:33.040920 sshd-session[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:33.046339 systemd-logind[1434]: New session 11 of user core. May 13 23:48:33.061263 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:48:33.201924 sshd[3640]: Connection closed by 10.0.0.1 port 46640 May 13 23:48:33.204059 sshd-session[3638]: pam_unix(sshd:session): session closed for user core May 13 23:48:33.214817 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:46640.service: Deactivated successfully. May 13 23:48:33.217377 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:48:33.218378 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. May 13 23:48:33.225621 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:46652.service - OpenSSH per-connection server daemon (10.0.0.1:46652). May 13 23:48:33.226740 systemd-logind[1434]: Removed session 11. May 13 23:48:33.267638 sshd[3652]: Accepted publickey for core from 10.0.0.1 port 46652 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:33.269253 sshd-session[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:33.274921 systemd-logind[1434]: New session 12 of user core. May 13 23:48:33.284216 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:48:33.584507 sshd[3655]: Connection closed by 10.0.0.1 port 46652 May 13 23:48:33.586345 sshd-session[3652]: pam_unix(sshd:session): session closed for user core May 13 23:48:33.598549 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:46652.service: Deactivated successfully. May 13 23:48:33.601277 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:48:33.602177 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit. May 13 23:48:33.614571 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:46658.service - OpenSSH per-connection server daemon (10.0.0.1:46658). May 13 23:48:33.615831 systemd-logind[1434]: Removed session 12. 
May 13 23:48:33.664585 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 46658 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:33.666400 sshd-session[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:33.673312 systemd-logind[1434]: New session 13 of user core. May 13 23:48:33.680208 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:48:34.986137 sshd[3668]: Connection closed by 10.0.0.1 port 46658 May 13 23:48:34.987529 sshd-session[3665]: pam_unix(sshd:session): session closed for user core May 13 23:48:35.001365 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:46658.service: Deactivated successfully. May 13 23:48:35.006933 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:48:35.007855 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit. May 13 23:48:35.016517 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:46668.service - OpenSSH per-connection server daemon (10.0.0.1:46668). May 13 23:48:35.018509 systemd-logind[1434]: Removed session 13. May 13 23:48:35.058400 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 46668 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:35.059947 sshd-session[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:35.064413 systemd-logind[1434]: New session 14 of user core. May 13 23:48:35.073175 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:48:35.292047 sshd[3689]: Connection closed by 10.0.0.1 port 46668 May 13 23:48:35.292670 sshd-session[3686]: pam_unix(sshd:session): session closed for user core May 13 23:48:35.304539 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:46668.service: Deactivated successfully. May 13 23:48:35.306379 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:48:35.307354 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit. May 13 23:48:35.312453 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:46676.service - OpenSSH per-connection server daemon (10.0.0.1:46676). May 13 23:48:35.314477 systemd-logind[1434]: Removed session 14. May 13 23:48:35.358103 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 46676 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:35.359500 sshd-session[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:35.366173 systemd-logind[1434]: New session 15 of user core. May 13 23:48:35.378260 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:48:35.494431 sshd[3702]: Connection closed by 10.0.0.1 port 46676 May 13 23:48:35.494788 sshd-session[3699]: pam_unix(sshd:session): session closed for user core May 13 23:48:35.497891 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:46676.service: Deactivated successfully. May 13 23:48:35.500114 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:48:35.501215 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit. May 13 23:48:35.502058 systemd-logind[1434]: Removed session 15. May 13 23:48:40.513805 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:46682.service - OpenSSH per-connection server daemon (10.0.0.1:46682). 
May 13 23:48:40.560355 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 46682 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:40.562486 sshd-session[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:40.572275 systemd-logind[1434]: New session 16 of user core. May 13 23:48:40.581211 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:48:40.696335 sshd[3744]: Connection closed by 10.0.0.1 port 46682 May 13 23:48:40.696821 sshd-session[3742]: pam_unix(sshd:session): session closed for user core May 13 23:48:40.700556 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:46682.service: Deactivated successfully. May 13 23:48:40.702353 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:48:40.703967 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit. May 13 23:48:40.705212 systemd-logind[1434]: Removed session 16. May 13 23:48:45.709586 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:32980.service - OpenSSH per-connection server daemon (10.0.0.1:32980). May 13 23:48:45.753234 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 32980 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:45.754903 sshd-session[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:45.759508 systemd-logind[1434]: New session 17 of user core. May 13 23:48:45.769201 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 23:48:45.884796 sshd[3780]: Connection closed by 10.0.0.1 port 32980 May 13 23:48:45.885212 sshd-session[3778]: pam_unix(sshd:session): session closed for user core May 13 23:48:45.888434 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:32980.service: Deactivated successfully. May 13 23:48:45.890177 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:48:45.890857 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. May 13 23:48:45.891788 systemd-logind[1434]: Removed session 17. May 13 23:48:50.900259 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988). May 13 23:48:50.947297 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:48:50.948543 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:50.953323 systemd-logind[1434]: New session 18 of user core. May 13 23:48:50.966199 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:48:51.081477 sshd[3817]: Connection closed by 10.0.0.1 port 32988 May 13 23:48:51.082046 sshd-session[3815]: pam_unix(sshd:session): session closed for user core May 13 23:48:51.086603 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:32988.service: Deactivated successfully. May 13 23:48:51.089488 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:48:51.090286 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. May 13 23:48:51.091376 systemd-logind[1434]: Removed session 18.