Apr 30 00:01:01.945500 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 00:01:01.945522 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Apr 29 22:24:03 -00 2025 Apr 30 00:01:01.945532 kernel: KASLR enabled Apr 30 00:01:01.945538 kernel: efi: EFI v2.7 by EDK II Apr 30 00:01:01.945544 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Apr 30 00:01:01.945549 kernel: random: crng init done Apr 30 00:01:01.945556 kernel: secureboot: Secure boot disabled Apr 30 00:01:01.945562 kernel: ACPI: Early table checksum verification disabled Apr 30 00:01:01.945569 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Apr 30 00:01:01.945576 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Apr 30 00:01:01.945582 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945589 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945595 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945601 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945608 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945616 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945622 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945628 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945635 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:01:01.945641 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Apr 30 00:01:01.945647 kernel: NUMA: Failed to initialise from firmware Apr 30 00:01:01.945654 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:01:01.945660 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Apr 30 00:01:01.945666 kernel: Zone ranges: Apr 30 00:01:01.945672 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:01:01.945680 kernel: DMA32 empty Apr 30 00:01:01.945686 kernel: Normal empty Apr 30 00:01:01.945692 kernel: Movable zone start for each node Apr 30 00:01:01.945699 kernel: Early memory node ranges Apr 30 00:01:01.945705 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Apr 30 00:01:01.945711 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Apr 30 00:01:01.945718 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Apr 30 00:01:01.945724 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Apr 30 00:01:01.945730 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Apr 30 00:01:01.945736 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Apr 30 00:01:01.945743 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Apr 30 00:01:01.945749 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Apr 30 00:01:01.945757 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Apr 30 00:01:01.945763 kernel: psci: probing for conduit method from ACPI. Apr 30 00:01:01.945770 kernel: psci: PSCIv1.1 detected in firmware. 
Apr 30 00:01:01.945779 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 00:01:01.945785 kernel: psci: Trusted OS migration not required Apr 30 00:01:01.945792 kernel: psci: SMC Calling Convention v1.1 Apr 30 00:01:01.945800 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Apr 30 00:01:01.945807 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 00:01:01.945813 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 00:01:01.945821 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Apr 30 00:01:01.945827 kernel: Detected PIPT I-cache on CPU0 Apr 30 00:01:01.945834 kernel: CPU features: detected: GIC system register CPU interface Apr 30 00:01:01.945841 kernel: CPU features: detected: Hardware dirty bit management Apr 30 00:01:01.945847 kernel: CPU features: detected: Spectre-v4 Apr 30 00:01:01.945854 kernel: CPU features: detected: Spectre-BHB Apr 30 00:01:01.945861 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 00:01:01.945869 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 00:01:01.945875 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 00:01:01.945882 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 00:01:01.945890 kernel: alternatives: applying boot alternatives Apr 30 00:01:01.945902 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd Apr 30 00:01:01.945911 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:01:01.945918 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 00:01:01.945925 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:01:01.945932 kernel: Fallback order for Node 0: 0 Apr 30 00:01:01.945938 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Apr 30 00:01:01.945945 kernel: Policy zone: DMA Apr 30 00:01:01.945953 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:01:01.945959 kernel: software IO TLB: area num 4. Apr 30 00:01:01.945966 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Apr 30 00:01:01.945973 kernel: Memory: 2386196K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186092K reserved, 0K cma-reserved) Apr 30 00:01:01.945980 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 30 00:01:01.945987 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:01:01.945994 kernel: rcu: RCU event tracing is enabled. Apr 30 00:01:01.946001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 30 00:01:01.946008 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:01:01.946015 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:01:01.946022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
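
The kernel command line logged above carries Flatcar's A/B usr-partition and dm-verity parameters (mount.usr, verity.usr, verity.usrhash) alongside the usual root=, console= and first-boot flags. As a rough illustration only (not part of this boot log), a small helper along these lines could split such a command line into key/value pairs for inspection; the path /proc/cmdline and the function name are assumptions for the sketch.

```python
# Sketch: parse a kernel command line like the one logged above into key/value pairs.
# Parameters that carry no '=' map to an empty string.
def parse_cmdline(path="/proc/cmdline"):  # path is an assumption for illustration
    with open(path) as f:
        tokens = f.read().split()
    params = {}
    for tok in tokens:
        key, _, value = tok.partition("=")
        params[key] = value
    return params

if __name__ == "__main__":
    p = parse_cmdline()
    print(p.get("root"), p.get("mount.usr"), p.get("verity.usrhash"))
```
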
Apr 30 00:01:01.946028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 30 00:01:01.946037 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 00:01:01.946044 kernel: GICv3: 256 SPIs implemented Apr 30 00:01:01.946050 kernel: GICv3: 0 Extended SPIs implemented Apr 30 00:01:01.946057 kernel: Root IRQ handler: gic_handle_irq Apr 30 00:01:01.946064 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 00:01:01.946070 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Apr 30 00:01:01.946077 kernel: ITS [mem 0x08080000-0x0809ffff] Apr 30 00:01:01.946093 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Apr 30 00:01:01.946100 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Apr 30 00:01:01.946106 kernel: GICv3: using LPI property table @0x00000000400f0000 Apr 30 00:01:01.946113 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Apr 30 00:01:01.946122 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:01:01.946129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:01:01.946136 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 00:01:01.946142 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 00:01:01.946149 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 00:01:01.946156 kernel: arm-pv: using stolen time PV Apr 30 00:01:01.946163 kernel: Console: colour dummy device 80x25 Apr 30 00:01:01.946170 kernel: ACPI: Core revision 20230628 Apr 30 00:01:01.946177 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 00:01:01.946183 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:01:01.946191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:01:01.946198 kernel: landlock: Up and running. Apr 30 00:01:01.946205 kernel: SELinux: Initializing. Apr 30 00:01:01.946212 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:01:01.946219 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 00:01:01.946226 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 00:01:01.946233 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 00:01:01.946240 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:01:01.946247 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:01:01.946256 kernel: Platform MSI: ITS@0x8080000 domain created Apr 30 00:01:01.946263 kernel: PCI/MSI: ITS@0x8080000 domain created Apr 30 00:01:01.946294 kernel: Remapping and enabling EFI services. Apr 30 00:01:01.946302 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 00:01:01.946309 kernel: Detected PIPT I-cache on CPU1 Apr 30 00:01:01.946316 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Apr 30 00:01:01.946323 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Apr 30 00:01:01.946330 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:01:01.946337 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 00:01:01.946344 kernel: Detected PIPT I-cache on CPU2 Apr 30 00:01:01.946353 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Apr 30 00:01:01.946360 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Apr 30 00:01:01.946372 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:01:01.946381 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Apr 30 00:01:01.946389 kernel: Detected PIPT I-cache on CPU3 Apr 30 00:01:01.946397 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Apr 30 00:01:01.946404 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Apr 30 00:01:01.946411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 00:01:01.946419 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Apr 30 00:01:01.946427 kernel: smp: Brought up 1 node, 4 CPUs Apr 30 00:01:01.946435 kernel: SMP: Total of 4 processors activated. Apr 30 00:01:01.946442 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 00:01:01.946449 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 00:01:01.946457 kernel: CPU features: detected: Common not Private translations Apr 30 00:01:01.946464 kernel: CPU features: detected: CRC32 instructions Apr 30 00:01:01.946471 kernel: CPU features: detected: Enhanced Virtualization Traps Apr 30 00:01:01.946478 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 00:01:01.946487 kernel: CPU features: detected: LSE atomic instructions Apr 30 00:01:01.946494 kernel: CPU features: detected: Privileged Access Never Apr 30 00:01:01.946501 kernel: CPU features: detected: RAS Extension Support Apr 30 00:01:01.946508 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 30 00:01:01.946516 kernel: CPU: All CPU(s) started at EL1 Apr 30 00:01:01.946523 kernel: alternatives: applying system-wide alternatives Apr 30 00:01:01.946530 kernel: devtmpfs: initialized Apr 30 00:01:01.946537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:01:01.946545 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 30 00:01:01.946554 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:01:01.946561 kernel: SMBIOS 3.0.0 present. 
Apr 30 00:01:01.946569 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Apr 30 00:01:01.946576 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:01:01.946584 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 00:01:01.946591 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 00:01:01.946599 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 00:01:01.946606 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:01:01.946613 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Apr 30 00:01:01.946622 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:01:01.946629 kernel: cpuidle: using governor menu Apr 30 00:01:01.946637 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 30 00:01:01.946644 kernel: ASID allocator initialised with 32768 entries Apr 30 00:01:01.946651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:01:01.946659 kernel: Serial: AMBA PL011 UART driver Apr 30 00:01:01.946666 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 00:01:01.946673 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 00:01:01.946680 kernel: Modules: 508928 pages in range for PLT usage Apr 30 00:01:01.946689 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:01:01.946696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:01:01.946704 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 00:01:01.946711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 00:01:01.946719 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:01:01.946726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:01:01.946733 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 00:01:01.946740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 00:01:01.946748 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:01:01.946756 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:01:01.946763 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:01:01.946771 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:01:01.946778 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:01:01.946785 kernel: ACPI: Interpreter enabled Apr 30 00:01:01.946793 kernel: ACPI: Using GIC for interrupt routing Apr 30 00:01:01.946800 kernel: ACPI: MCFG table detected, 1 entries Apr 30 00:01:01.946808 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Apr 30 00:01:01.946815 kernel: printk: console [ttyAMA0] enabled Apr 30 00:01:01.946824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 00:01:01.946957 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 00:01:01.947034 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 00:01:01.947109 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 00:01:01.947180 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Apr 30 00:01:01.947246 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Apr 30 00:01:01.947255 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Apr 30 00:01:01.947265 
kernel: PCI host bridge to bus 0000:00 Apr 30 00:01:01.947364 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Apr 30 00:01:01.947426 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 30 00:01:01.947487 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Apr 30 00:01:01.947546 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 00:01:01.947627 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Apr 30 00:01:01.947712 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 00:01:01.947787 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Apr 30 00:01:01.947856 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Apr 30 00:01:01.947925 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 00:01:01.947995 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 00:01:01.948063 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Apr 30 00:01:01.948144 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Apr 30 00:01:01.948209 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 30 00:01:01.948290 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 30 00:01:01.948353 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 30 00:01:01.948363 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 30 00:01:01.948371 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 30 00:01:01.948378 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 30 00:01:01.948385 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 30 00:01:01.948392 kernel: iommu: Default domain type: Translated Apr 30 00:01:01.948400 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 00:01:01.948410 kernel: efivars: Registered efivars operations Apr 30 00:01:01.948417 kernel: vgaarb: loaded Apr 30 00:01:01.948425 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 00:01:01.948432 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:01:01.948439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:01:01.948447 kernel: pnp: PnP ACPI init Apr 30 00:01:01.948519 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 30 00:01:01.948529 kernel: pnp: PnP ACPI: found 1 devices Apr 30 00:01:01.948539 kernel: NET: Registered PF_INET protocol family Apr 30 00:01:01.948546 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 00:01:01.948554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 00:01:01.948561 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 00:01:01.948569 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:01:01.948576 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 00:01:01.948584 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 00:01:01.948591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:01:01.948599 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 00:01:01.948607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:01:01.948615 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:01:01.948622 kernel: kvm [1]: HYP mode not available 
Apr 30 00:01:01.948629 kernel: Initialise system trusted keyrings Apr 30 00:01:01.948637 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 00:01:01.948644 kernel: Key type asymmetric registered Apr 30 00:01:01.948651 kernel: Asymmetric key parser 'x509' registered Apr 30 00:01:01.948658 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 00:01:01.948665 kernel: io scheduler mq-deadline registered Apr 30 00:01:01.948676 kernel: io scheduler kyber registered Apr 30 00:01:01.948683 kernel: io scheduler bfq registered Apr 30 00:01:01.948691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 00:01:01.948698 kernel: ACPI: button: Power Button [PWRB] Apr 30 00:01:01.948706 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 00:01:01.948776 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Apr 30 00:01:01.948785 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:01:01.948793 kernel: thunder_xcv, ver 1.0 Apr 30 00:01:01.948800 kernel: thunder_bgx, ver 1.0 Apr 30 00:01:01.948809 kernel: nicpf, ver 1.0 Apr 30 00:01:01.948820 kernel: nicvf, ver 1.0 Apr 30 00:01:01.948898 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 00:01:01.948965 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:01:01 UTC (1745971261) Apr 30 00:01:01.948975 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 00:01:01.948983 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 00:01:01.948991 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 00:01:01.949003 kernel: watchdog: Hard watchdog permanently disabled Apr 30 00:01:01.949015 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:01:01.949022 kernel: Segment Routing with IPv6 Apr 30 00:01:01.949030 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:01:01.949038 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:01:01.949045 kernel: Key type dns_resolver registered Apr 30 00:01:01.949052 kernel: registered taskstats version 1 Apr 30 00:01:01.949060 kernel: Loading compiled-in X.509 certificates Apr 30 00:01:01.949068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: bbef389676bd9584646af24e9e264c7789f8630f' Apr 30 00:01:01.949075 kernel: Key type .fscrypt registered Apr 30 00:01:01.949090 kernel: Key type fscrypt-provisioning registered Apr 30 00:01:01.949098 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 00:01:01.949106 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:01:01.949113 kernel: ima: No architecture policies found Apr 30 00:01:01.949121 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 00:01:01.949128 kernel: clk: Disabling unused clocks Apr 30 00:01:01.949136 kernel: Freeing unused kernel memory: 39744K Apr 30 00:01:01.949143 kernel: Run /init as init process Apr 30 00:01:01.949151 kernel: with arguments: Apr 30 00:01:01.949159 kernel: /init Apr 30 00:01:01.949166 kernel: with environment: Apr 30 00:01:01.949173 kernel: HOME=/ Apr 30 00:01:01.949181 kernel: TERM=linux Apr 30 00:01:01.949188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:01:01.949197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:01:01.949207 systemd[1]: Detected virtualization kvm. Apr 30 00:01:01.949215 systemd[1]: Detected architecture arm64. Apr 30 00:01:01.949224 systemd[1]: Running in initrd. Apr 30 00:01:01.949232 systemd[1]: No hostname configured, using default hostname. Apr 30 00:01:01.949240 systemd[1]: Hostname set to . Apr 30 00:01:01.949248 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:01:01.949255 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:01:01.949263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:01:01.949281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:01:01.949290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 00:01:01.949301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:01:01.949309 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:01:01.949317 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:01:01.949326 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:01:01.949334 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:01:01.949342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:01:01.949352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:01:01.949374 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:01:01.949383 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:01:01.949390 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:01:01.949398 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:01:01.949406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:01:01.949413 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:01:01.949421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:01:01.949429 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:01:01.949439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 30 00:01:01.949446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:01:01.949454 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:01:01.949462 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:01:01.949470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 00:01:01.949478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:01:01.949485 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 00:01:01.949493 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 00:01:01.949501 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:01:01.949510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:01:01.949517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:01:01.949525 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 00:01:01.949533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:01:01.949541 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 00:01:01.949551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:01:01.949578 systemd-journald[239]: Collecting audit messages is disabled. Apr 30 00:01:01.949597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:01:01.949608 systemd-journald[239]: Journal started Apr 30 00:01:01.949627 systemd-journald[239]: Runtime Journal (/run/log/journal/8fed4cb9e7bb4f1898e5fd72d559808e) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:01:01.940127 systemd-modules-load[240]: Inserted module 'overlay' Apr 30 00:01:01.953537 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:01:01.955755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:01:01.964290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 00:01:01.965929 systemd-modules-load[240]: Inserted module 'br_netfilter' Apr 30 00:01:01.967017 kernel: Bridge firewalling registered Apr 30 00:01:01.968557 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:01:01.970746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:01:01.974612 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:01:01.976040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:01:01.982964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:01:01.984282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:01:01.987635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:01:01.999320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:01:02.000953 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:01:02.016508 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 00:01:02.019019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 00:01:02.032653 dracut-cmdline[279]: dracut-dracut-053 Apr 30 00:01:02.035568 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd Apr 30 00:01:02.051785 systemd-resolved[281]: Positive Trust Anchors: Apr 30 00:01:02.051856 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:01:02.051888 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:01:02.056705 systemd-resolved[281]: Defaulting to hostname 'linux'. Apr 30 00:01:02.057683 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:01:02.061854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:01:02.105312 kernel: SCSI subsystem initialized Apr 30 00:01:02.110285 kernel: Loading iSCSI transport class v2.0-870. Apr 30 00:01:02.118300 kernel: iscsi: registered transport (tcp) Apr 30 00:01:02.131339 kernel: iscsi: registered transport (qla4xxx) Apr 30 00:01:02.131358 kernel: QLogic iSCSI HBA Driver Apr 30 00:01:02.176267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:01:02.187478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:01:02.204300 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 00:01:02.204363 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:01:02.205960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:01:02.254307 kernel: raid6: neonx8 gen() 15600 MB/s Apr 30 00:01:02.271295 kernel: raid6: neonx4 gen() 15574 MB/s Apr 30 00:01:02.288291 kernel: raid6: neonx2 gen() 13098 MB/s Apr 30 00:01:02.305298 kernel: raid6: neonx1 gen() 10457 MB/s Apr 30 00:01:02.322320 kernel: raid6: int64x8 gen() 6846 MB/s Apr 30 00:01:02.339303 kernel: raid6: int64x4 gen() 7100 MB/s Apr 30 00:01:02.356291 kernel: raid6: int64x2 gen() 6106 MB/s Apr 30 00:01:02.373607 kernel: raid6: int64x1 gen() 5027 MB/s Apr 30 00:01:02.373649 kernel: raid6: using algorithm neonx8 gen() 15600 MB/s Apr 30 00:01:02.391524 kernel: raid6: .... xor() 11864 MB/s, rmw enabled Apr 30 00:01:02.391587 kernel: raid6: using neon recovery algorithm Apr 30 00:01:02.397722 kernel: xor: measuring software checksum speed Apr 30 00:01:02.397772 kernel: 8regs : 15683 MB/sec Apr 30 00:01:02.398409 kernel: 32regs : 19636 MB/sec Apr 30 00:01:02.399774 kernel: arm64_neon : 26910 MB/sec Apr 30 00:01:02.399815 kernel: xor: using function: arm64_neon (26910 MB/sec) Apr 30 00:01:02.454335 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:01:02.467323 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Apr 30 00:01:02.475434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:01:02.487593 systemd-udevd[464]: Using default interface naming scheme 'v255'. Apr 30 00:01:02.491011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:01:02.504481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:01:02.516753 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Apr 30 00:01:02.546349 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:01:02.559445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:01:02.599778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:01:02.610820 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:01:02.623635 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:01:02.625600 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:01:02.627562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:01:02.629679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:01:02.641520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:01:02.654118 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Apr 30 00:01:02.665312 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 00:01:02.665425 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 00:01:02.665436 kernel: GPT:9289727 != 19775487 Apr 30 00:01:02.665446 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 00:01:02.665455 kernel: GPT:9289727 != 19775487 Apr 30 00:01:02.665463 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 00:01:02.665479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:01:02.653486 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:01:02.669654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:01:02.669771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:01:02.673162 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:01:02.674317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:01:02.674839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:01:02.677405 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:01:02.688741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:01:02.697411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (515) Apr 30 00:01:02.697437 kernel: BTRFS: device fsid 9647859b-527c-478f-8aa1-9dfa3fa871e3 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (527) Apr 30 00:01:02.702309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:01:02.712554 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 00:01:02.720990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 00:01:02.726242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
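
The GPT warnings above ("GPT:9289727 != 19775487", "Alternate GPT header not at the end of the disk") are the usual sign of a disk image written to a larger virtual disk: the backup GPT header still sits where the end of the original image was rather than at the end of /dev/vda, and the disk-uuid step further down rewrites the headers. A back-of-the-envelope check, with values copied from the log (reading the smaller figure as the original image size is an inference, not something the log states):

```python
# Values taken from the log above; interpretation of the smaller size is an assumption.
SECTOR = 512
total_sectors = 19775488          # virtio_blk reports 19775488 512-byte logical blocks
backup_hdr_lba = 9289727          # where the backup GPT header actually sits (last LBA would be 19775487)
print(total_sectors * SECTOR / 2**30)        # ~9.43 GiB: the virtual disk, matching "10.1 GB/9.43 GiB"
print((backup_hdr_lba + 1) * SECTOR / 2**30) # ~4.43 GiB: roughly the size the image seems built for
```
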
Apr 30 00:01:02.730618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 00:01:02.732141 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 00:01:02.745433 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:01:02.750484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:01:02.753364 disk-uuid[554]: Primary Header is updated. Apr 30 00:01:02.753364 disk-uuid[554]: Secondary Entries is updated. Apr 30 00:01:02.753364 disk-uuid[554]: Secondary Header is updated. Apr 30 00:01:02.758290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:01:02.773620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:01:03.768901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:01:03.768982 disk-uuid[555]: The operation has completed successfully. Apr 30 00:01:03.788954 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:01:03.789050 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:01:03.814457 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:01:03.817254 sh[575]: Success Apr 30 00:01:03.827332 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:01:03.856187 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:01:03.870616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:01:03.873033 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:01:03.882644 kernel: BTRFS info (device dm-0): first mount of filesystem 9647859b-527c-478f-8aa1-9dfa3fa871e3 Apr 30 00:01:03.882687 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:01:03.882698 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:01:03.883731 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:01:03.885287 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:01:03.888772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:01:03.890205 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:01:03.891016 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:01:03.896342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:01:03.906111 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:01:03.906165 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:01:03.906184 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:01:03.909289 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:01:03.917443 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:01:03.919440 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:01:03.926594 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:01:03.932433 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 30 00:01:03.997028 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:01:04.009456 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:01:04.031317 ignition[671]: Ignition 2.20.0 Apr 30 00:01:04.031327 ignition[671]: Stage: fetch-offline Apr 30 00:01:04.031370 ignition[671]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:04.031378 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:04.031565 ignition[671]: parsed url from cmdline: "" Apr 30 00:01:04.031568 ignition[671]: no config URL provided Apr 30 00:01:04.031572 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:01:04.031580 ignition[671]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:01:04.036757 systemd-networkd[766]: lo: Link UP Apr 30 00:01:04.031606 ignition[671]: op(1): [started] loading QEMU firmware config module Apr 30 00:01:04.036761 systemd-networkd[766]: lo: Gained carrier Apr 30 00:01:04.031610 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 00:01:04.037492 systemd-networkd[766]: Enumeration completed Apr 30 00:01:04.037604 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:01:04.037926 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:01:04.045697 ignition[671]: op(1): [finished] loading QEMU firmware config module Apr 30 00:01:04.037929 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:01:04.045719 ignition[671]: QEMU firmware config was not found. Ignoring... Apr 30 00:01:04.039527 systemd-networkd[766]: eth0: Link UP Apr 30 00:01:04.039530 systemd-networkd[766]: eth0: Gained carrier Apr 30 00:01:04.039537 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:01:04.039888 systemd[1]: Reached target network.target - Network. Apr 30 00:01:04.074329 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:01:04.075026 ignition[671]: parsing config with SHA512: dcc36eeb9ff9482be5265037e3fbf7b5a75fb5fac2256f94c96bd929f81a309220c3c1ab329a6b1e464d30f669450b30e5b647ea8c6c1ffe3e86743a21982b8c Apr 30 00:01:04.082838 unknown[671]: fetched base config from "system" Apr 30 00:01:04.082848 unknown[671]: fetched user config from "qemu" Apr 30 00:01:04.083471 ignition[671]: fetch-offline: fetch-offline passed Apr 30 00:01:04.085378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:01:04.083554 ignition[671]: Ignition finished successfully Apr 30 00:01:04.087331 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 00:01:04.096434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:01:04.106640 ignition[773]: Ignition 2.20.0 Apr 30 00:01:04.106651 ignition[773]: Stage: kargs Apr 30 00:01:04.106820 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:04.106830 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:04.110691 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
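
The fetch-offline stage above looked for a local config at /usr/lib/ignition/user.ign, found none, and then fetched a user config via the QEMU firmware-config channel ('fetched user config from "qemu"'); the later "files" stage shows that config adding SSH keys for the core user and writing units such as prepare-helm.service. As a loose illustration only (the actual config behind this boot is not shown in the log), a minimal Ignition-style user config for adding an SSH key might look like the dict below; the spec version and key value are placeholders.

```python
import json

# Loose sketch of a minimal Ignition-style user config (placeholder values only;
# the real config fetched from QEMU in this boot is not visible in the log).
user_config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"]}
        ]
    },
}
print(json.dumps(user_config, indent=2))
```
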
Apr 30 00:01:04.107757 ignition[773]: kargs: kargs passed Apr 30 00:01:04.107803 ignition[773]: Ignition finished successfully Apr 30 00:01:04.119428 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:01:04.129129 ignition[782]: Ignition 2.20.0 Apr 30 00:01:04.129141 ignition[782]: Stage: disks Apr 30 00:01:04.129353 ignition[782]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:04.129363 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:04.132022 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:01:04.130363 ignition[782]: disks: disks passed Apr 30 00:01:04.133457 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:01:04.130415 ignition[782]: Ignition finished successfully Apr 30 00:01:04.135396 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:01:04.137426 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:01:04.138929 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:01:04.140947 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:01:04.151453 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:01:04.162604 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 00:01:04.166983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:01:04.184398 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:01:04.229296 kernel: EXT4-fs (vda9): mounted filesystem cd2ccabc-5b27-4350-bc86-21c9a8411827 r/w with ordered data mode. Quota mode: none. Apr 30 00:01:04.229862 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:01:04.231197 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:01:04.249408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:01:04.251266 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:01:04.253691 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 00:01:04.253743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:01:04.253767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:01:04.261610 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Apr 30 00:01:04.258205 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:01:04.262487 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:01:04.268101 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:01:04.268124 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:01:04.268134 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:01:04.268143 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:01:04.270104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:01:04.318350 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:01:04.321594 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:01:04.325082 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:01:04.329221 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:01:04.413633 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:01:04.426866 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:01:04.429824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:01:04.435284 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:01:04.454703 ignition[914]: INFO : Ignition 2.20.0 Apr 30 00:01:04.454703 ignition[914]: INFO : Stage: mount Apr 30 00:01:04.456542 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:04.456542 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:04.456542 ignition[914]: INFO : mount: mount passed Apr 30 00:01:04.456542 ignition[914]: INFO : Ignition finished successfully Apr 30 00:01:04.458605 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:01:04.464359 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:01:04.465495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:01:04.881406 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:01:04.898493 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:01:04.908046 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928) Apr 30 00:01:04.908087 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:01:04.908099 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:01:04.909008 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:01:04.914293 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:01:04.915400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:01:04.949492 ignition[945]: INFO : Ignition 2.20.0 Apr 30 00:01:04.949492 ignition[945]: INFO : Stage: files Apr 30 00:01:04.951244 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:04.951244 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:04.951244 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:01:04.956171 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:01:04.956171 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:01:04.960102 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:01:04.961502 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:01:04.962767 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:01:04.961915 unknown[945]: wrote ssh authorized keys file for user: core Apr 30 00:01:04.965096 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:01:04.965096 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:01:04.968539 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:01:04.968539 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:01:05.069048 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 00:01:05.276214 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:01:05.276214 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:01:05.280110 ignition[945]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:01:05.280110 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:01:05.569157 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 00:01:05.904828 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:01:05.904828 ignition[945]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 30 00:01:05.908723 ignition[945]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 30 00:01:05.936309 ignition[945]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:01:05.940486 ignition[945]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:01:05.942157 ignition[945]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 30 00:01:05.942157 ignition[945]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:01:05.946083 ignition[945]: INFO : files: op(14): [finished] 
setting preset to enabled for "prepare-helm.service" Apr 30 00:01:05.946083 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:01:05.946083 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:01:05.946083 ignition[945]: INFO : files: files passed Apr 30 00:01:05.946083 ignition[945]: INFO : Ignition finished successfully Apr 30 00:01:05.944912 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:01:05.962479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:01:05.968559 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 00:01:05.970042 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:01:05.970139 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:01:05.976394 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Apr 30 00:01:05.980011 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:01:05.980011 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:01:05.983161 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:01:05.983091 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:01:05.984704 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:01:05.993508 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:01:06.012501 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:01:06.013338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:01:06.014843 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:01:06.016751 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:01:06.018624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:01:06.030802 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:01:06.048021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:01:06.050482 systemd-networkd[766]: eth0: Gained IPv6LL Apr 30 00:01:06.060495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:01:06.069022 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:01:06.071391 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:01:06.074045 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:01:06.076081 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:01:06.076254 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:01:06.079098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:01:06.081310 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:01:06.083144 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Apr 30 00:01:06.085039 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:01:06.087079 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:01:06.089441 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:01:06.091282 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:01:06.093416 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:01:06.095549 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:01:06.097453 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:01:06.099382 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:01:06.099561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:01:06.102687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:01:06.104822 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:01:06.107616 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:01:06.111403 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:01:06.114097 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:01:06.114294 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:01:06.117522 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:01:06.117688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:01:06.120692 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:01:06.123187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:01:06.128411 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:01:06.131564 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:01:06.134757 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:01:06.136388 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:01:06.136516 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:01:06.138854 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:01:06.139138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:01:06.141117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:01:06.141454 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:01:06.143884 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:01:06.144177 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:01:06.154584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:01:06.155582 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:01:06.155772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:01:06.162804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:01:06.165310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:01:06.165701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:01:06.169218 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:01:06.169392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:01:06.173163 ignition[999]: INFO : Ignition 2.20.0 Apr 30 00:01:06.173163 ignition[999]: INFO : Stage: umount Apr 30 00:01:06.173163 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:01:06.173163 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:01:06.173163 ignition[999]: INFO : umount: umount passed Apr 30 00:01:06.173163 ignition[999]: INFO : Ignition finished successfully Apr 30 00:01:06.176222 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:01:06.176449 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:01:06.179931 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:01:06.182353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:01:06.182440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:01:06.185301 systemd[1]: Stopped target network.target - Network. Apr 30 00:01:06.186588 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:01:06.186673 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:01:06.189488 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:01:06.189596 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:01:06.191851 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:01:06.191906 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:01:06.194343 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:01:06.194392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:01:06.196398 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:01:06.198910 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:01:06.203899 systemd-networkd[766]: eth0: DHCPv6 lease lost Apr 30 00:01:06.205762 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:01:06.205895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:01:06.208483 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:01:06.208650 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:01:06.211507 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:01:06.211565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:01:06.223443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:01:06.224712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:01:06.224787 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:01:06.227588 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:01:06.227644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:01:06.229597 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:01:06.229699 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:01:06.232491 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:01:06.232545 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:01:06.235409 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:01:06.237160 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Apr 30 00:01:06.237320 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:01:06.241214 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:01:06.241267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:01:06.249043 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:01:06.249201 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:01:06.258321 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:01:06.258462 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:01:06.260908 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:01:06.260945 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:01:06.263053 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:01:06.263095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:01:06.265262 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:01:06.265326 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:01:06.268423 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:01:06.268491 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:01:06.271402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:01:06.271449 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:01:06.284488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:01:06.285697 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:01:06.285813 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:01:06.288209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:01:06.288260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:01:06.290606 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:01:06.291329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:01:06.293400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:01:06.296047 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:01:06.307020 systemd[1]: Switching root. Apr 30 00:01:06.341911 systemd-journald[239]: Journal stopped Apr 30 00:01:07.154964 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Apr 30 00:01:07.155051 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:01:07.155075 kernel: SELinux: policy capability open_perms=1 Apr 30 00:01:07.155094 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:01:07.155104 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:01:07.155113 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:01:07.155123 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:01:07.155132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:01:07.155141 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:01:07.155151 kernel: audit: type=1403 audit(1745971266.555:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:01:07.155162 systemd[1]: Successfully loaded SELinux policy in 40.954ms. 
Apr 30 00:01:07.155182 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.733ms. Apr 30 00:01:07.155195 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:01:07.155207 systemd[1]: Detected virtualization kvm. Apr 30 00:01:07.155217 systemd[1]: Detected architecture arm64. Apr 30 00:01:07.155227 systemd[1]: Detected first boot. Apr 30 00:01:07.155240 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:01:07.155250 zram_generator::config[1060]: No configuration found. Apr 30 00:01:07.155261 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:01:07.155282 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:01:07.155295 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 00:01:07.155306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:01:07.155316 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:01:07.155327 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:01:07.155337 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:01:07.155348 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:01:07.155359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:01:07.155369 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:01:07.155382 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:01:07.155392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:01:07.155406 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:01:07.155417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:01:07.155428 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:01:07.155439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:01:07.155449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:01:07.155460 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:01:07.155470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:01:07.155483 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:01:07.155494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:01:07.155504 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:01:07.155515 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:01:07.155526 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:01:07.155536 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:01:07.155547 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Apr 30 00:01:07.155558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:01:07.155570 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:01:07.155581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:01:07.155592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:01:07.155602 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:01:07.155613 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:01:07.155623 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:01:07.155634 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:01:07.155645 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:01:07.155655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:01:07.155666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:01:07.155678 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:01:07.155689 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:01:07.155700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:01:07.155713 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:01:07.155748 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:01:07.155762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:01:07.155773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:01:07.155784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:01:07.155797 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:01:07.155807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:01:07.155818 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:01:07.155829 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 00:01:07.155840 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 00:01:07.155850 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:01:07.155861 kernel: fuse: init (API version 7.39) Apr 30 00:01:07.155871 kernel: loop: module loaded Apr 30 00:01:07.155881 kernel: ACPI: bus type drm_connector registered Apr 30 00:01:07.155892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:01:07.155903 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:01:07.155913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:01:07.155925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:01:07.155937 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:01:07.155948 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Apr 30 00:01:07.155958 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:01:07.155970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:01:07.155982 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:01:07.155993 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:01:07.156024 systemd-journald[1140]: Collecting audit messages is disabled. Apr 30 00:01:07.156046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:01:07.156058 systemd-journald[1140]: Journal started Apr 30 00:01:07.156086 systemd-journald[1140]: Runtime Journal (/run/log/journal/8fed4cb9e7bb4f1898e5fd72d559808e) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:01:07.157310 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:01:07.157346 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:01:07.163309 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:01:07.164242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:01:07.164434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:01:07.165908 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:01:07.167373 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:01:07.167529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:01:07.168979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:01:07.169149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:01:07.170655 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:01:07.170810 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:01:07.172510 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:01:07.172715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:01:07.174285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:01:07.175670 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:01:07.177242 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:01:07.192976 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:01:07.209468 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:01:07.212013 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:01:07.213214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:01:07.217394 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:01:07.220074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:01:07.221427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:01:07.223046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:01:07.224354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 30 00:01:07.225916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:01:07.231449 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:01:07.235646 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:01:07.238593 systemd-journald[1140]: Time spent on flushing to /var/log/journal/8fed4cb9e7bb4f1898e5fd72d559808e is 20.405ms for 846 entries. Apr 30 00:01:07.238593 systemd-journald[1140]: System Journal (/var/log/journal/8fed4cb9e7bb4f1898e5fd72d559808e) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:01:07.276866 systemd-journald[1140]: Received client request to flush runtime journal. Apr 30 00:01:07.238808 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:01:07.241067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:01:07.243152 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:01:07.247053 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:01:07.258522 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:01:07.260125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:01:07.267876 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Apr 30 00:01:07.267887 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Apr 30 00:01:07.272170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:01:07.275254 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 00:01:07.288610 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:01:07.290348 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:01:07.329538 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:01:07.348496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:01:07.372922 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 30 00:01:07.372943 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 30 00:01:07.377874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:01:07.690787 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:01:07.704639 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:01:07.727662 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Apr 30 00:01:07.741157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:01:07.751466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:01:07.768510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:01:07.773084 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Apr 30 00:01:07.787798 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1230) Apr 30 00:01:07.810496 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:01:07.826652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 30 00:01:07.882396 systemd-networkd[1233]: lo: Link UP Apr 30 00:01:07.882732 systemd-networkd[1233]: lo: Gained carrier Apr 30 00:01:07.883590 systemd-networkd[1233]: Enumeration completed Apr 30 00:01:07.884111 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:01:07.884189 systemd-networkd[1233]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:01:07.884589 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:01:07.885300 systemd-networkd[1233]: eth0: Link UP Apr 30 00:01:07.885304 systemd-networkd[1233]: eth0: Gained carrier Apr 30 00:01:07.885317 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:01:07.901542 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:01:07.904924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:01:07.910347 systemd-networkd[1233]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:01:07.915238 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:01:07.918401 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:01:07.949416 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:01:07.958932 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:01:07.993881 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:01:07.995509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:01:08.008426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:01:08.013622 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:01:08.048846 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:01:08.050454 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:01:08.051720 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:01:08.051763 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:01:08.052791 systemd[1]: Reached target machines.target - Containers. Apr 30 00:01:08.055046 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:01:08.067450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:01:08.070030 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:01:08.071336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:01:08.072407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:01:08.075531 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:01:08.080993 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
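systemd-networkd matches eth0 against /usr/lib/systemd/network/zz-default.network and then acquires 10.0.0.70/16 over DHCP from 10.0.0.1. The shipped file itself is not reproduced in the journal; as an illustration only (the real zz-default.network may differ), a catch-all DHCP configuration of this kind is essentially a [Match] on any interface name plus DHCP=yes, as the small helper below renders. The render_network_unit function is a hypothetical helper, not anything from the log.

    # Illustration only: the actual zz-default.network shipped by Flatcar is not
    # shown in the log; this just renders a catch-all DHCP .network unit of the
    # same general kind.
    def render_network_unit(match_name: str = "*", dhcp: str = "yes") -> str:
        return (
            "[Match]\n"
            f"Name={match_name}\n"
            "\n"
            "[Network]\n"
            f"DHCP={dhcp}\n"
        )

    print(render_network_unit())   # matches any interface name, requests DHCPv4/v6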
Apr 30 00:01:08.083025 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:01:08.088370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:01:08.093343 kernel: loop0: detected capacity change from 0 to 116808 Apr 30 00:01:08.095640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:01:08.097048 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:01:08.104289 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:01:08.152302 kernel: loop1: detected capacity change from 0 to 194096 Apr 30 00:01:08.189290 kernel: loop2: detected capacity change from 0 to 113536 Apr 30 00:01:08.227303 kernel: loop3: detected capacity change from 0 to 116808 Apr 30 00:01:08.232329 kernel: loop4: detected capacity change from 0 to 194096 Apr 30 00:01:08.240294 kernel: loop5: detected capacity change from 0 to 113536 Apr 30 00:01:08.248049 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 00:01:08.248480 (sd-merge)[1289]: Merged extensions into '/usr'. Apr 30 00:01:08.252168 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:01:08.252184 systemd[1]: Reloading... Apr 30 00:01:08.296314 zram_generator::config[1316]: No configuration found. Apr 30 00:01:08.349811 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:01:08.396938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:01:08.441508 systemd[1]: Reloading finished in 188 ms. Apr 30 00:01:08.458654 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:01:08.460213 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:01:08.478470 systemd[1]: Starting ensure-sysext.service... Apr 30 00:01:08.480772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:01:08.485699 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:01:08.485715 systemd[1]: Reloading... Apr 30 00:01:08.502414 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:01:08.502747 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:01:08.503579 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:01:08.503799 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Apr 30 00:01:08.503845 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Apr 30 00:01:08.506362 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:01:08.506374 systemd-tmpfiles[1360]: Skipping /boot Apr 30 00:01:08.517913 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:01:08.517964 systemd-tmpfiles[1360]: Skipping /boot Apr 30 00:01:08.526364 zram_generator::config[1388]: No configuration found. 
Apr 30 00:01:08.624049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:01:08.667134 systemd[1]: Reloading finished in 181 ms. Apr 30 00:01:08.686083 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:01:08.703891 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:01:08.706870 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:01:08.709630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:01:08.715626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:01:08.720455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:01:08.725240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:01:08.739748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:01:08.743094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:01:08.747559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:01:08.749529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:01:08.752112 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:01:08.755187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:01:08.755368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:01:08.757424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:01:08.757583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:01:08.759581 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:01:08.759789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:01:08.776119 augenrules[1467]: No rules Apr 30 00:01:08.777721 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:01:08.779663 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:01:08.779906 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:01:08.784426 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:01:08.795581 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:01:08.796722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:01:08.798025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:01:08.802568 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:01:08.806879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:01:08.809425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:01:08.810573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:01:08.812387 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 30 00:01:08.813451 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:01:08.814532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:01:08.814677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:01:08.816739 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:01:08.816894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:01:08.817807 augenrules[1479]: /sbin/augenrules: No change Apr 30 00:01:08.818452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:01:08.818602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:01:08.821683 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:01:08.821888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:01:08.825786 systemd[1]: Finished ensure-sysext.service. Apr 30 00:01:08.830388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:01:08.830461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:01:08.831560 augenrules[1512]: No rules Apr 30 00:01:08.834469 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:01:08.835937 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:01:08.836181 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:01:08.837533 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:01:08.840877 systemd-resolved[1435]: Positive Trust Anchors: Apr 30 00:01:08.840956 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:01:08.840987 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:01:08.846834 systemd-resolved[1435]: Defaulting to hostname 'linux'. Apr 30 00:01:08.848605 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:01:08.849768 systemd[1]: Reached target network.target - Network. Apr 30 00:01:08.850659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:01:08.883247 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:01:09.296767 systemd-resolved[1435]: Clock change detected. Flushing caches. Apr 30 00:01:09.296885 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 00:01:09.296940 systemd-timesyncd[1519]: Initial clock synchronization to Wed 2025-04-30 00:01:09.296713 UTC. Apr 30 00:01:09.297358 systemd[1]: Reached target sysinit.target - System Initialization. 
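Right after systemd-timesyncd contacts 10.0.0.1:123, systemd-resolved logs "Clock change detected. Flushing caches." and the journal's wall-clock timestamps jump from 00:01:08.88 to 00:01:09.29. A rough reading of that step from the two adjacent timestamps above, as an approximation only, since some real time also elapses between the two entries:

    # Rough estimate of the apparent clock step, taken from two adjacent journal
    # timestamps in the log above; not an exact measurement of the NTP correction.
    before = 1 * 60 + 8.883247   # 00:01:08.883247 -> seconds past the hour
    after = 1 * 60 + 9.296767    # 00:01:09.296767

    print(f"apparent jump: {after - before:.3f} s")   # ~0.414 s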
Apr 30 00:01:09.298546 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:01:09.299836 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:01:09.301080 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:01:09.302313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:01:09.302347 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:01:09.303252 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:01:09.304412 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:01:09.305597 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:01:09.306836 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:01:09.308505 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:01:09.311131 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:01:09.313090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:01:09.317740 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:01:09.318834 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:01:09.319832 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:01:09.320896 systemd[1]: System is tainted: cgroupsv1 Apr 30 00:01:09.320942 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:01:09.320961 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:01:09.322241 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:01:09.324494 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:01:09.327414 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:01:09.330706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:01:09.336308 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:01:09.341806 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:01:09.352203 jq[1527]: false Apr 30 00:01:09.352385 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 30 00:01:09.356332 extend-filesystems[1529]: Found loop3 Apr 30 00:01:09.356332 extend-filesystems[1529]: Found loop4 Apr 30 00:01:09.356332 extend-filesystems[1529]: Found loop5 Apr 30 00:01:09.356332 extend-filesystems[1529]: Found vda Apr 30 00:01:09.356332 extend-filesystems[1529]: Found vda1 Apr 30 00:01:09.356332 extend-filesystems[1529]: Found vda2 Apr 30 00:01:09.356332 extend-filesystems[1529]: Found vda3 Apr 30 00:01:09.364005 extend-filesystems[1529]: Found usr Apr 30 00:01:09.364005 extend-filesystems[1529]: Found vda4 Apr 30 00:01:09.364005 extend-filesystems[1529]: Found vda6 Apr 30 00:01:09.364005 extend-filesystems[1529]: Found vda7 Apr 30 00:01:09.364005 extend-filesystems[1529]: Found vda9 Apr 30 00:01:09.364005 extend-filesystems[1529]: Checking size of /dev/vda9 Apr 30 00:01:09.366918 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:01:09.370642 dbus-daemon[1526]: [system] SELinux support is enabled Apr 30 00:01:09.371023 extend-filesystems[1529]: Resized partition /dev/vda9 Apr 30 00:01:09.373939 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:01:09.380710 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:01:09.382914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1234) Apr 30 00:01:09.384418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:01:09.386541 extend-filesystems[1548]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:01:09.388353 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:01:09.393145 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:01:09.397777 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 00:01:09.398118 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:01:09.405144 jq[1554]: true Apr 30 00:01:09.407186 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:01:09.407452 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:01:09.407714 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:01:09.407946 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:01:09.411277 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:01:09.411630 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:01:09.435213 jq[1560]: true Apr 30 00:01:09.437803 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:01:09.450703 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:01:09.452263 systemd-logind[1550]: New seat seat0. Apr 30 00:01:09.461238 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 00:01:09.464955 systemd[1]: Started systemd-logind.service - User Login Management. 
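The resize2fs/EXT4 messages around this point record /dev/vda9 being grown online from 553472 to 1864699 blocks (4 KiB each, per the extend-filesystems summary just below). In bytes, using only the figures reported in the log:

    # Arithmetic only, using the block counts reported in the log above.
    BLOCK_SIZE = 4096          # "(4k)" blocks per the extend-filesystems output

    old_blocks = 553_472
    new_blocks = 1_864_699

    old_bytes = old_blocks * BLOCK_SIZE   # ~2.1 GiB before the resize
    new_bytes = new_blocks * BLOCK_SIZE   # ~7.1 GiB after the online resize

    print(f"before: {old_bytes / 2**30:.2f} GiB, after: {new_bytes / 2**30:.2f} GiB")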
Apr 30 00:01:09.472695 tar[1558]: linux-arm64/helm Apr 30 00:01:09.472978 update_engine[1553]: I20250430 00:01:09.470901 1553 main.cc:92] Flatcar Update Engine starting Apr 30 00:01:09.475947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:01:09.478396 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 00:01:09.478396 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:01:09.478396 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 00:01:09.476112 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:01:09.489885 update_engine[1553]: I20250430 00:01:09.478413 1553 update_check_scheduler.cc:74] Next update check in 6m24s Apr 30 00:01:09.489938 extend-filesystems[1529]: Resized filesystem in /dev/vda9 Apr 30 00:01:09.477684 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:01:09.477804 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:01:09.484020 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:01:09.484269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:01:09.488596 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:01:09.492037 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:01:09.502235 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:01:09.523353 bash[1590]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:01:09.525606 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:01:09.527430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 00:01:09.604777 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:01:09.672829 containerd[1561]: time="2025-04-30T00:01:09.672324832Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 00:01:09.698783 containerd[1561]: time="2025-04-30T00:01:09.698384312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.699913 containerd[1561]: time="2025-04-30T00:01:09.699879272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:01:09.699913 containerd[1561]: time="2025-04-30T00:01:09.699910152Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:01:09.699984 containerd[1561]: time="2025-04-30T00:01:09.699925392Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 30 00:01:09.700088 containerd[1561]: time="2025-04-30T00:01:09.700068152Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:01:09.700115 containerd[1561]: time="2025-04-30T00:01:09.700090272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700147912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700162912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700377232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700393552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700406392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700415312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700484032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700675432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700811192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700825072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:01:09.701061 containerd[1561]: time="2025-04-30T00:01:09.700898712Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:01:09.701301 containerd[1561]: time="2025-04-30T00:01:09.700936672Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:01:09.704422 containerd[1561]: time="2025-04-30T00:01:09.704375272Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:01:09.704422 containerd[1561]: time="2025-04-30T00:01:09.704422472Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:01:09.704501 containerd[1561]: time="2025-04-30T00:01:09.704439072Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Apr 30 00:01:09.704501 containerd[1561]: time="2025-04-30T00:01:09.704454552Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:01:09.704501 containerd[1561]: time="2025-04-30T00:01:09.704467592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:01:09.704674 containerd[1561]: time="2025-04-30T00:01:09.704623072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:01:09.705088 containerd[1561]: time="2025-04-30T00:01:09.705068792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:01:09.705192 containerd[1561]: time="2025-04-30T00:01:09.705176072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:01:09.705233 containerd[1561]: time="2025-04-30T00:01:09.705195312Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:01:09.705233 containerd[1561]: time="2025-04-30T00:01:09.705208712Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:01:09.705233 containerd[1561]: time="2025-04-30T00:01:09.705220672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705281 containerd[1561]: time="2025-04-30T00:01:09.705234032Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705281 containerd[1561]: time="2025-04-30T00:01:09.705246232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705281 containerd[1561]: time="2025-04-30T00:01:09.705260392Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705281 containerd[1561]: time="2025-04-30T00:01:09.705274672Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705286152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705297832Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705308312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705327152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705339432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705351512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705367 containerd[1561]: time="2025-04-30T00:01:09.705364472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705375552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705388152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705398432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705410112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705421432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705434352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705444952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705456672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705467392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705483 containerd[1561]: time="2025-04-30T00:01:09.705485992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:01:09.705659 containerd[1561]: time="2025-04-30T00:01:09.705506192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705659 containerd[1561]: time="2025-04-30T00:01:09.705519112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705659 containerd[1561]: time="2025-04-30T00:01:09.705529392Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705708032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705743912Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705763632Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705775912Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705784312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705795272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705804552Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:01:09.705848 containerd[1561]: time="2025-04-30T00:01:09.705815712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:01:09.706724 containerd[1561]: time="2025-04-30T00:01:09.706130592Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:01:09.706724 containerd[1561]: time="2025-04-30T00:01:09.706180592Z" level=info msg="Connect containerd service" Apr 30 00:01:09.706724 containerd[1561]: time="2025-04-30T00:01:09.706207952Z" level=info msg="using legacy CRI server" Apr 30 00:01:09.706724 containerd[1561]: time="2025-04-30T00:01:09.706215112Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:01:09.706724 containerd[1561]: time="2025-04-30T00:01:09.706452872Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:01:09.707108 
containerd[1561]: time="2025-04-30T00:01:09.707081672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707344472Z" level=info msg="Start subscribing containerd event" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707401272Z" level=info msg="Start recovering state" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707466432Z" level=info msg="Start event monitor" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707478992Z" level=info msg="Start snapshots syncer" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707488152Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:01:09.707689 containerd[1561]: time="2025-04-30T00:01:09.707495752Z" level=info msg="Start streaming server" Apr 30 00:01:09.707840 containerd[1561]: time="2025-04-30T00:01:09.707740432Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:01:09.708770 containerd[1561]: time="2025-04-30T00:01:09.707862112Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:01:09.708019 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:01:09.709791 containerd[1561]: time="2025-04-30T00:01:09.709743312Z" level=info msg="containerd successfully booted in 0.040135s" Apr 30 00:01:09.827169 tar[1558]: linux-arm64/LICENSE Apr 30 00:01:09.827267 tar[1558]: linux-arm64/README.md Apr 30 00:01:09.840652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:01:09.998331 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:01:10.018193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:01:10.033031 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:01:10.038649 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:01:10.038935 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:01:10.041955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:01:10.055068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:01:10.058580 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:01:10.060911 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 00:01:10.062292 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:01:10.301894 systemd-networkd[1233]: eth0: Gained IPv6LL Apr 30 00:01:10.304465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:01:10.306667 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:01:10.318062 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 00:01:10.320900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:10.323294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:01:10.344198 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 00:01:10.344542 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 00:01:10.346857 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
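The containerd entries above end with the daemon serving on /run/containerd/containerd.sock (plus the ttrpc socket) once its plugins are loaded. A minimal sketch of how a client would talk to that socket, assuming the github.com/containerd/containerd Go client module and the "k8s.io" namespace that the CRI plugin uses; this is illustration only, not part of the boot sequence.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the socket the daemon reports serving on in the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images and containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Println("containerd", version.Version)

	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatalf("list images: %v", err)
	}
	for _, img := range images {
		fmt.Println(img.Name())
	}
}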
Apr 30 00:01:10.349295 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:01:10.836796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:10.838430 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:01:10.841312 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:01:10.843141 systemd[1]: Startup finished in 5.424s (kernel) + 3.916s (userspace) = 9.341s. Apr 30 00:01:11.359514 kubelet[1663]: E0430 00:01:11.359088 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:01:11.362600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:01:11.362983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:01:14.772445 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:01:14.787051 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:57600.service - OpenSSH per-connection server daemon (10.0.0.1:57600). Apr 30 00:01:14.858109 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 57600 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:01:14.860332 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:01:14.869898 systemd-logind[1550]: New session 1 of user core. Apr 30 00:01:14.870959 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:01:14.885135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:01:14.896145 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:01:14.898536 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:01:14.905449 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:01:15.011552 systemd[1683]: Queued start job for default target default.target. Apr 30 00:01:15.011982 systemd[1683]: Created slice app.slice - User Application Slice. Apr 30 00:01:15.012009 systemd[1683]: Reached target paths.target - Paths. Apr 30 00:01:15.012020 systemd[1683]: Reached target timers.target - Timers. Apr 30 00:01:15.025902 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:01:15.032367 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:01:15.032434 systemd[1683]: Reached target sockets.target - Sockets. Apr 30 00:01:15.032446 systemd[1683]: Reached target basic.target - Basic System. Apr 30 00:01:15.032484 systemd[1683]: Reached target default.target - Main User Target. Apr 30 00:01:15.032508 systemd[1683]: Startup finished in 117ms. Apr 30 00:01:15.032742 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:01:15.034315 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:01:15.091057 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:57612.service - OpenSSH per-connection server daemon (10.0.0.1:57612). 
Apr 30 00:01:15.135522 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 57612 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:01:15.136822 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:01:15.141835 systemd-logind[1550]: New session 2 of user core. Apr 30 00:01:15.150141 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:01:15.205809 sshd[1698]: Connection closed by 10.0.0.1 port 57612 Apr 30 00:01:15.206329 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Apr 30 00:01:15.220162 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:57618.service - OpenSSH per-connection server daemon (10.0.0.1:57618). Apr 30 00:01:15.220595 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:57612.service: Deactivated successfully. Apr 30 00:01:15.223270 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:01:15.224620 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:01:15.225977 systemd-logind[1550]: Removed session 2. Apr 30 00:01:15.259479 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 57618 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:01:15.260771 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:01:15.265196 systemd-logind[1550]: New session 3 of user core. Apr 30 00:01:15.278042 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:01:15.327846 sshd[1706]: Connection closed by 10.0.0.1 port 57618 Apr 30 00:01:15.329030 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Apr 30 00:01:15.338099 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). Apr 30 00:01:15.338494 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:57618.service: Deactivated successfully. Apr 30 00:01:15.341067 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:01:15.341950 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:01:15.343503 systemd-logind[1550]: Removed session 3. Apr 30 00:01:15.376388 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:01:15.377705 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:01:15.383711 systemd-logind[1550]: New session 4 of user core. Apr 30 00:01:15.398118 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:01:15.454769 sshd[1714]: Connection closed by 10.0.0.1 port 57624 Apr 30 00:01:15.455361 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Apr 30 00:01:15.466069 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:57626.service - OpenSSH per-connection server daemon (10.0.0.1:57626). Apr 30 00:01:15.466602 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:57624.service: Deactivated successfully. Apr 30 00:01:15.469290 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:01:15.469971 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:01:15.471843 systemd-logind[1550]: Removed session 4. 
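The bursts of short sessions above (sessions 2, 3 and 4 opened and closed within a fraction of a second, each authenticated with the same RSA key for user core) are the pattern of a provisioner running one command per connection. A sketch of that client-side pattern with golang.org/x/crypto/ssh; the key path and command are hypothetical, and host-key checking is disabled only to keep the sketch short.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // never do this outside a sketch
	}

	client, err := ssh.Dial("tcp", "10.0.0.70:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One short-lived session per command, like the sessions in the journal above.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}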
Apr 30 00:01:15.501607 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 57626 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:01:15.503111 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:01:15.507069 systemd-logind[1550]: New session 5 of user core. Apr 30 00:01:15.527122 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:01:15.592183 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:01:15.592458 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:01:15.954023 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:01:15.954298 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:01:16.261880 dockerd[1743]: time="2025-04-30T00:01:16.261504232Z" level=info msg="Starting up" Apr 30 00:01:16.526082 dockerd[1743]: time="2025-04-30T00:01:16.525729872Z" level=info msg="Loading containers: start." Apr 30 00:01:16.683779 kernel: Initializing XFRM netlink socket Apr 30 00:01:16.756629 systemd-networkd[1233]: docker0: Link UP Apr 30 00:01:16.822364 dockerd[1743]: time="2025-04-30T00:01:16.822132472Z" level=info msg="Loading containers: done." Apr 30 00:01:16.834434 dockerd[1743]: time="2025-04-30T00:01:16.834031552Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:01:16.834434 dockerd[1743]: time="2025-04-30T00:01:16.834131032Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 30 00:01:16.834434 dockerd[1743]: time="2025-04-30T00:01:16.834243392Z" level=info msg="Daemon has completed initialization" Apr 30 00:01:16.868681 dockerd[1743]: time="2025-04-30T00:01:16.868625352Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:01:16.868849 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:01:17.665593 containerd[1561]: time="2025-04-30T00:01:17.665486952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 00:01:18.424423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677144029.mount: Deactivated successfully. 
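Once dockerd logs "API listen on /run/docker.sock", the engine is reachable over that Unix socket. A minimal sketch using the official Go client, assuming the github.com/docker/docker/client module and the default socket location; it only pings the daemon and prints the negotiated API version.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix:///var/run/docker.sock when
	// DOCKER_HOST is unset; version negotiation matches whatever the daemon offers.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err) // daemon not up yet, or the socket is not accessible
	}
	fmt.Println("docker API version:", ping.APIVersion)
}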
Apr 30 00:01:19.677409 containerd[1561]: time="2025-04-30T00:01:19.677359512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:19.678400 containerd[1561]: time="2025-04-30T00:01:19.678200512Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" Apr 30 00:01:19.681044 containerd[1561]: time="2025-04-30T00:01:19.680983392Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:19.685593 containerd[1561]: time="2025-04-30T00:01:19.685545872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:19.686816 containerd[1561]: time="2025-04-30T00:01:19.686770832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.02120796s" Apr 30 00:01:19.686869 containerd[1561]: time="2025-04-30T00:01:19.686816072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 00:01:19.705974 containerd[1561]: time="2025-04-30T00:01:19.705920552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 00:01:21.433794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:01:21.446996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:21.551039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
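The "PullImage ... returns image reference" entries above come from the CRI plugin fetching the control-plane images. Roughly the same pull can be expressed through the containerd Go client; a sketch assuming the github.com/containerd/containerd module, the "k8s.io" namespace, and the image tag shown in the log, with unpack enabled so the layers land in the overlayfs snapshotter.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image referenced in the log above.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.30.12", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes of content)\n", img.Name(), size)
}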
Apr 30 00:01:21.553964 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:01:21.679453 containerd[1561]: time="2025-04-30T00:01:21.679403192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:21.680612 containerd[1561]: time="2025-04-30T00:01:21.680573992Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" Apr 30 00:01:21.682209 containerd[1561]: time="2025-04-30T00:01:21.682177392Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:21.684957 containerd[1561]: time="2025-04-30T00:01:21.684857632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:21.687229 containerd[1561]: time="2025-04-30T00:01:21.687186552Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.98122004s" Apr 30 00:01:21.687306 containerd[1561]: time="2025-04-30T00:01:21.687228752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 00:01:21.704122 kubelet[2021]: E0430 00:01:21.704039 2021 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:01:21.707177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:01:21.707373 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
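The repeated kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is simply an absent file: on a kubeadm-style node that file is only written by kubeadm init or kubeadm join, so the unit keeps failing and restarting until then. A small standard-library sketch that performs the same existence check the error reflects; purely illustrative.

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"

	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println(path, "exists; the kubelet can load its configuration")
	case errors.Is(err, fs.ErrNotExist):
		// Matches the journal entry: kubeadm has not generated the file yet.
		fmt.Println(path, "is missing; expected until 'kubeadm init' or 'kubeadm join' runs")
	default:
		fmt.Println("unexpected error:", err)
	}
}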
Apr 30 00:01:21.709993 containerd[1561]: time="2025-04-30T00:01:21.709958792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 00:01:22.657811 containerd[1561]: time="2025-04-30T00:01:22.657743952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:22.659000 containerd[1561]: time="2025-04-30T00:01:22.658962312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" Apr 30 00:01:22.659825 containerd[1561]: time="2025-04-30T00:01:22.659791952Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:22.663311 containerd[1561]: time="2025-04-30T00:01:22.663274432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:22.664005 containerd[1561]: time="2025-04-30T00:01:22.663975072Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 953.87436ms" Apr 30 00:01:22.664079 containerd[1561]: time="2025-04-30T00:01:22.664003872Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 00:01:22.682938 containerd[1561]: time="2025-04-30T00:01:22.682885032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 00:01:23.555296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216699585.mount: Deactivated successfully. 
Apr 30 00:01:23.888341 containerd[1561]: time="2025-04-30T00:01:23.888095192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:23.889357 containerd[1561]: time="2025-04-30T00:01:23.889305952Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" Apr 30 00:01:23.890244 containerd[1561]: time="2025-04-30T00:01:23.890204432Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:23.892262 containerd[1561]: time="2025-04-30T00:01:23.892231232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:23.892976 containerd[1561]: time="2025-04-30T00:01:23.892900552Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.20995932s" Apr 30 00:01:23.892976 containerd[1561]: time="2025-04-30T00:01:23.892934912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 00:01:23.911764 containerd[1561]: time="2025-04-30T00:01:23.911726832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:01:24.490910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831975381.mount: Deactivated successfully. 
Apr 30 00:01:25.093851 containerd[1561]: time="2025-04-30T00:01:25.093804112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.096715 containerd[1561]: time="2025-04-30T00:01:25.096669232Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Apr 30 00:01:25.097850 containerd[1561]: time="2025-04-30T00:01:25.097823152Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.101777 containerd[1561]: time="2025-04-30T00:01:25.101711032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.102986 containerd[1561]: time="2025-04-30T00:01:25.102860992Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.19108772s" Apr 30 00:01:25.102986 containerd[1561]: time="2025-04-30T00:01:25.102898592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:01:25.120979 containerd[1561]: time="2025-04-30T00:01:25.120933512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 00:01:25.711661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436347053.mount: Deactivated successfully. 
Apr 30 00:01:25.717248 containerd[1561]: time="2025-04-30T00:01:25.717011112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.717831 containerd[1561]: time="2025-04-30T00:01:25.717587352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Apr 30 00:01:25.719294 containerd[1561]: time="2025-04-30T00:01:25.719207792Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.726013 containerd[1561]: time="2025-04-30T00:01:25.725968552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:25.726767 containerd[1561]: time="2025-04-30T00:01:25.726675672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 605.70692ms" Apr 30 00:01:25.726767 containerd[1561]: time="2025-04-30T00:01:25.726703072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 00:01:25.747071 containerd[1561]: time="2025-04-30T00:01:25.747040592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 00:01:26.252337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124056354.mount: Deactivated successfully. Apr 30 00:01:28.528309 containerd[1561]: time="2025-04-30T00:01:28.528254312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:28.529724 containerd[1561]: time="2025-04-30T00:01:28.529681072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Apr 30 00:01:28.530577 containerd[1561]: time="2025-04-30T00:01:28.530525952Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:28.534306 containerd[1561]: time="2025-04-30T00:01:28.534274032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:01:28.536635 containerd[1561]: time="2025-04-30T00:01:28.536494112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.78941864s" Apr 30 00:01:28.536635 containerd[1561]: time="2025-04-30T00:01:28.536532992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 00:01:31.933739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 30 00:01:31.941937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:32.143139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:32.148265 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:01:32.186339 kubelet[2255]: E0430 00:01:32.186199 2255 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:01:32.188907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:01:32.189103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:01:33.801699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:33.814960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:33.832921 systemd[1]: Reloading requested from client PID 2273 ('systemctl') (unit session-5.scope)... Apr 30 00:01:33.832938 systemd[1]: Reloading... Apr 30 00:01:33.904772 zram_generator::config[2318]: No configuration found. Apr 30 00:01:34.004235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:01:34.057696 systemd[1]: Reloading finished in 224 ms. Apr 30 00:01:34.097915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:01:34.097978 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:01:34.098224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:34.100634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:34.188848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:34.192696 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:01:34.234189 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:01:34.234189 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:01:34.234189 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:01:34.235085 kubelet[2370]: I0430 00:01:34.235043 2370 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:01:35.488473 kubelet[2370]: I0430 00:01:35.488407 2370 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:01:35.488473 kubelet[2370]: I0430 00:01:35.488460 2370 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:01:35.488868 kubelet[2370]: I0430 00:01:35.488667 2370 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:01:35.545871 kubelet[2370]: E0430 00:01:35.545827 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.545997 kubelet[2370]: I0430 00:01:35.545969 2370 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:01:35.555492 kubelet[2370]: I0430 00:01:35.555458 2370 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:01:35.556823 kubelet[2370]: I0430 00:01:35.556766 2370 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:01:35.556992 kubelet[2370]: I0430 00:01:35.556816 2370 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:01:35.557067 kubelet[2370]: I0430 00:01:35.557053 2370 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:01:35.557067 kubelet[2370]: I0430 00:01:35.557063 2370 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:01:35.557325 kubelet[2370]: I0430 00:01:35.557303 2370 state_mem.go:36] "Initialized new in-memory state store" Apr 30 
00:01:35.558448 kubelet[2370]: I0430 00:01:35.558372 2370 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:01:35.558448 kubelet[2370]: I0430 00:01:35.558395 2370 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:01:35.558787 kubelet[2370]: I0430 00:01:35.558775 2370 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:01:35.559160 kubelet[2370]: I0430 00:01:35.558977 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:01:35.559160 kubelet[2370]: W0430 00:01:35.559078 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.559160 kubelet[2370]: E0430 00:01:35.559134 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.559520 kubelet[2370]: W0430 00:01:35.559484 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.559571 kubelet[2370]: E0430 00:01:35.559528 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.560196 kubelet[2370]: I0430 00:01:35.560164 2370 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:01:35.560691 kubelet[2370]: I0430 00:01:35.560660 2370 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:01:35.560876 kubelet[2370]: W0430 00:01:35.560830 2370 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
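Every "dial tcp 10.0.0.70:6443: connect: connection refused" above is expected at this stage: the kubelet comes up before the kube-apiserver it is about to launch from the static pod path /etc/kubernetes/manifests, so its watches and the certificate bootstrap keep retrying until that pod is serving. A small standard-library sketch of the same reachability probe, with the address taken from the log; the retry count and delay are arbitrary.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "10.0.0.70:6443" // endpoint the kubelet is retrying in the log

	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err != nil {
			// "connection refused" until the static kube-apiserver pod is running.
			fmt.Println("not reachable yet:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		conn.Close()
		fmt.Println("kube-apiserver is accepting connections")
		return
	}
	fmt.Println("gave up waiting for", apiServer)
}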
Apr 30 00:01:35.562401 kubelet[2370]: I0430 00:01:35.562133 2370 server.go:1264] "Started kubelet" Apr 30 00:01:35.562653 kubelet[2370]: I0430 00:01:35.562619 2370 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:01:35.564343 kubelet[2370]: I0430 00:01:35.564318 2370 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:01:35.566941 kubelet[2370]: I0430 00:01:35.566918 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:01:35.570575 kubelet[2370]: I0430 00:01:35.570507 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:01:35.570881 kubelet[2370]: I0430 00:01:35.570735 2370 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:01:35.571615 kubelet[2370]: I0430 00:01:35.571478 2370 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:01:35.573119 kubelet[2370]: I0430 00:01:35.573090 2370 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:01:35.574121 kubelet[2370]: I0430 00:01:35.574105 2370 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:01:35.575813 kubelet[2370]: W0430 00:01:35.575594 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.575813 kubelet[2370]: E0430 00:01:35.575652 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.575948 kubelet[2370]: E0430 00:01:35.575917 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Apr 30 00:01:35.576839 kubelet[2370]: I0430 00:01:35.576704 2370 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:01:35.576957 kubelet[2370]: I0430 00:01:35.576929 2370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:01:35.580797 kubelet[2370]: E0430 00:01:35.575712 2370 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183aefa48c020c80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:01:35.562108032 +0000 UTC m=+1.366272881,LastTimestamp:2025-04-30 00:01:35.562108032 +0000 UTC m=+1.366272881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:01:35.581443 kubelet[2370]: I0430 00:01:35.581206 2370 factory.go:221] Registration of the containerd container 
factory successfully Apr 30 00:01:35.582089 kubelet[2370]: E0430 00:01:35.581998 2370 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:01:35.588494 kubelet[2370]: I0430 00:01:35.586955 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:01:35.588494 kubelet[2370]: I0430 00:01:35.587897 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:01:35.588494 kubelet[2370]: I0430 00:01:35.587926 2370 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:01:35.588494 kubelet[2370]: I0430 00:01:35.587943 2370 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:01:35.588494 kubelet[2370]: E0430 00:01:35.587987 2370 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:01:35.590234 kubelet[2370]: W0430 00:01:35.590201 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.590318 kubelet[2370]: E0430 00:01:35.590244 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:35.599504 kubelet[2370]: I0430 00:01:35.599484 2370 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:01:35.599504 kubelet[2370]: I0430 00:01:35.599500 2370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:01:35.599598 kubelet[2370]: I0430 00:01:35.599517 2370 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:01:35.660815 kubelet[2370]: I0430 00:01:35.660783 2370 policy_none.go:49] "None policy: Start" Apr 30 00:01:35.661635 kubelet[2370]: I0430 00:01:35.661614 2370 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:01:35.661737 kubelet[2370]: I0430 00:01:35.661671 2370 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:01:35.666251 kubelet[2370]: I0430 00:01:35.666224 2370 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:01:35.666507 kubelet[2370]: I0430 00:01:35.666393 2370 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:01:35.666555 kubelet[2370]: I0430 00:01:35.666511 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:01:35.667684 kubelet[2370]: E0430 00:01:35.667664 2370 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:01:35.672784 kubelet[2370]: I0430 00:01:35.672761 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:01:35.673150 kubelet[2370]: E0430 00:01:35.673115 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Apr 30 00:01:35.688483 kubelet[2370]: I0430 00:01:35.688418 2370 topology_manager.go:215] "Topology Admit Handler" 
podUID="890ee4f10a25165c6610ae37164fae16" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 00:01:35.689345 kubelet[2370]: I0430 00:01:35.689323 2370 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 00:01:35.690246 kubelet[2370]: I0430 00:01:35.690208 2370 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 00:01:35.774599 kubelet[2370]: I0430 00:01:35.774398 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:01:35.774599 kubelet[2370]: I0430 00:01:35.774444 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:35.774599 kubelet[2370]: I0430 00:01:35.774469 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:35.774599 kubelet[2370]: I0430 00:01:35.774518 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:35.774599 kubelet[2370]: I0430 00:01:35.774563 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:35.774795 kubelet[2370]: I0430 00:01:35.774582 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:35.774795 kubelet[2370]: I0430 00:01:35.774599 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:35.774795 kubelet[2370]: I0430 00:01:35.774617 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:35.774795 kubelet[2370]: I0430 00:01:35.774631 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:35.776878 kubelet[2370]: E0430 00:01:35.776836 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Apr 30 00:01:35.875424 kubelet[2370]: I0430 00:01:35.875396 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:01:35.875821 kubelet[2370]: E0430 00:01:35.875783 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Apr 30 00:01:35.994503 kubelet[2370]: E0430 00:01:35.994458 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:35.994779 kubelet[2370]: E0430 00:01:35.994747 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:35.995259 containerd[1561]: time="2025-04-30T00:01:35.995227592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" Apr 30 00:01:35.995571 containerd[1561]: time="2025-04-30T00:01:35.995235792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:890ee4f10a25165c6610ae37164fae16,Namespace:kube-system,Attempt:0,}" Apr 30 00:01:35.996259 kubelet[2370]: E0430 00:01:35.996237 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:35.996858 containerd[1561]: time="2025-04-30T00:01:35.996571392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" Apr 30 00:01:36.177498 kubelet[2370]: E0430 00:01:36.177378 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Apr 30 00:01:36.279222 kubelet[2370]: I0430 00:01:36.278119 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:01:36.279222 kubelet[2370]: E0430 00:01:36.278417 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Apr 30 00:01:36.512069 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2668787418.mount: Deactivated successfully. Apr 30 00:01:36.517562 containerd[1561]: time="2025-04-30T00:01:36.517514952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:01:36.519432 containerd[1561]: time="2025-04-30T00:01:36.519388912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:01:36.520365 containerd[1561]: time="2025-04-30T00:01:36.520325352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:01:36.521154 containerd[1561]: time="2025-04-30T00:01:36.521123672Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:01:36.522707 containerd[1561]: time="2025-04-30T00:01:36.522673872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:01:36.523507 containerd[1561]: time="2025-04-30T00:01:36.523465032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:01:36.524105 containerd[1561]: time="2025-04-30T00:01:36.524068392Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:01:36.524718 containerd[1561]: time="2025-04-30T00:01:36.524681112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:01:36.526472 containerd[1561]: time="2025-04-30T00:01:36.526419752Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.08632ms" Apr 30 00:01:36.527581 containerd[1561]: time="2025-04-30T00:01:36.527532072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.90404ms" Apr 30 00:01:36.533164 containerd[1561]: time="2025-04-30T00:01:36.533022072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.61276ms" Apr 30 00:01:36.546669 kubelet[2370]: W0430 00:01:36.546594 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.546669 kubelet[2370]: E0430 00:01:36.546662 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.697995 containerd[1561]: time="2025-04-30T00:01:36.697770232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:01:36.697995 containerd[1561]: time="2025-04-30T00:01:36.697841712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:01:36.697995 containerd[1561]: time="2025-04-30T00:01:36.697856552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.698669 containerd[1561]: time="2025-04-30T00:01:36.698596672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.700267 containerd[1561]: time="2025-04-30T00:01:36.699597072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:01:36.700267 containerd[1561]: time="2025-04-30T00:01:36.699748632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:01:36.700267 containerd[1561]: time="2025-04-30T00:01:36.699786792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.700400 containerd[1561]: time="2025-04-30T00:01:36.699882632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.701003 containerd[1561]: time="2025-04-30T00:01:36.700916072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:01:36.701085 containerd[1561]: time="2025-04-30T00:01:36.700994192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:01:36.701085 containerd[1561]: time="2025-04-30T00:01:36.701052032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.701743 containerd[1561]: time="2025-04-30T00:01:36.701221992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:36.748456 containerd[1561]: time="2025-04-30T00:01:36.748357792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0dbbce130e05c044ff19eda024249596469ce7720f58d89293dacd508de950\"" Apr 30 00:01:36.749384 kubelet[2370]: E0430 00:01:36.749356 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:36.752849 containerd[1561]: time="2025-04-30T00:01:36.752762592Z" level=info msg="CreateContainer within sandbox \"ca0dbbce130e05c044ff19eda024249596469ce7720f58d89293dacd508de950\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:01:36.754187 containerd[1561]: time="2025-04-30T00:01:36.754164672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8400ec76cdee0cfe2b6a95a3ee51798140db62c53847f07f82482a3727aa587\"" Apr 30 00:01:36.754999 kubelet[2370]: E0430 00:01:36.754960 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:36.756256 containerd[1561]: time="2025-04-30T00:01:36.756224072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:890ee4f10a25165c6610ae37164fae16,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1bd12f192e5d2d14895064d9071347402223546bdeea8ebc8a1cf4e9195d065\"" Apr 30 00:01:36.756854 kubelet[2370]: E0430 00:01:36.756830 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:36.757035 containerd[1561]: time="2025-04-30T00:01:36.756828352Z" level=info msg="CreateContainer within sandbox \"a8400ec76cdee0cfe2b6a95a3ee51798140db62c53847f07f82482a3727aa587\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:01:36.759355 containerd[1561]: time="2025-04-30T00:01:36.759327352Z" level=info msg="CreateContainer within sandbox \"b1bd12f192e5d2d14895064d9071347402223546bdeea8ebc8a1cf4e9195d065\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:01:36.771438 containerd[1561]: time="2025-04-30T00:01:36.770437152Z" level=info msg="CreateContainer within sandbox \"ca0dbbce130e05c044ff19eda024249596469ce7720f58d89293dacd508de950\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90a2ac5e931ebde07024ff8cafa9352e22e8771ca522a0b7c0722d228c307aa5\"" Apr 30 00:01:36.771438 containerd[1561]: time="2025-04-30T00:01:36.771211472Z" level=info msg="StartContainer for \"90a2ac5e931ebde07024ff8cafa9352e22e8771ca522a0b7c0722d228c307aa5\"" Apr 30 00:01:36.775635 containerd[1561]: time="2025-04-30T00:01:36.775521392Z" level=info msg="CreateContainer within sandbox \"a8400ec76cdee0cfe2b6a95a3ee51798140db62c53847f07f82482a3727aa587\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"022bbf7d2c418c9700c8c3f716da14bd13dbb4dde40af6de2d42f192faa4937d\"" Apr 30 00:01:36.776084 containerd[1561]: time="2025-04-30T00:01:36.776042832Z" level=info msg="StartContainer for 
\"022bbf7d2c418c9700c8c3f716da14bd13dbb4dde40af6de2d42f192faa4937d\"" Apr 30 00:01:36.778012 containerd[1561]: time="2025-04-30T00:01:36.777890792Z" level=info msg="CreateContainer within sandbox \"b1bd12f192e5d2d14895064d9071347402223546bdeea8ebc8a1cf4e9195d065\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3295e6661ce9114bc7ddd93c804453a36de057ad80ff500af95f267f59690518\"" Apr 30 00:01:36.779746 containerd[1561]: time="2025-04-30T00:01:36.779596072Z" level=info msg="StartContainer for \"3295e6661ce9114bc7ddd93c804453a36de057ad80ff500af95f267f59690518\"" Apr 30 00:01:36.889557 containerd[1561]: time="2025-04-30T00:01:36.889483592Z" level=info msg="StartContainer for \"022bbf7d2c418c9700c8c3f716da14bd13dbb4dde40af6de2d42f192faa4937d\" returns successfully" Apr 30 00:01:36.889660 containerd[1561]: time="2025-04-30T00:01:36.889644352Z" level=info msg="StartContainer for \"3295e6661ce9114bc7ddd93c804453a36de057ad80ff500af95f267f59690518\" returns successfully" Apr 30 00:01:36.889683 containerd[1561]: time="2025-04-30T00:01:36.889669832Z" level=info msg="StartContainer for \"90a2ac5e931ebde07024ff8cafa9352e22e8771ca522a0b7c0722d228c307aa5\" returns successfully" Apr 30 00:01:36.948008 kubelet[2370]: W0430 00:01:36.947884 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.948008 kubelet[2370]: E0430 00:01:36.947952 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.961697 kubelet[2370]: W0430 00:01:36.961609 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.961697 kubelet[2370]: E0430 00:01:36.961679 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Apr 30 00:01:36.978222 kubelet[2370]: E0430 00:01:36.978160 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Apr 30 00:01:37.082446 kubelet[2370]: I0430 00:01:37.082208 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:01:37.604578 kubelet[2370]: E0430 00:01:37.604552 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:37.610521 kubelet[2370]: E0430 00:01:37.610488 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:37.613293 kubelet[2370]: E0430 00:01:37.613266 2370 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:38.592879 kubelet[2370]: I0430 00:01:38.592841 2370 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:01:38.608082 kubelet[2370]: E0430 00:01:38.608045 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:38.614299 kubelet[2370]: E0430 00:01:38.614229 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:38.708495 kubelet[2370]: E0430 00:01:38.708450 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:38.809001 kubelet[2370]: E0430 00:01:38.808958 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:38.909715 kubelet[2370]: E0430 00:01:38.909594 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:39.010363 kubelet[2370]: E0430 00:01:39.010315 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:39.111142 kubelet[2370]: E0430 00:01:39.111097 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:39.211822 kubelet[2370]: E0430 00:01:39.211783 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:39.562466 kubelet[2370]: I0430 00:01:39.562340 2370 apiserver.go:52] "Watching apiserver" Apr 30 00:01:39.573887 kubelet[2370]: I0430 00:01:39.573831 2370 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:01:39.725686 kubelet[2370]: E0430 00:01:39.725619 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:40.616841 kubelet[2370]: E0430 00:01:40.616813 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:40.636592 systemd[1]: Reloading requested from client PID 2649 ('systemctl') (unit session-5.scope)... Apr 30 00:01:40.636607 systemd[1]: Reloading... Apr 30 00:01:40.694801 zram_generator::config[2688]: No configuration found. Apr 30 00:01:40.806115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:01:40.867856 systemd[1]: Reloading finished in 230 ms. Apr 30 00:01:40.903651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:40.918774 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:01:40.919100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:01:40.931253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:01:41.022859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:01:41.028216 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:01:41.067795 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:01:41.067795 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:01:41.067795 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:01:41.068191 kubelet[2740]: I0430 00:01:41.067851 2740 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:01:41.072130 kubelet[2740]: I0430 00:01:41.072088 2740 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:01:41.072814 kubelet[2740]: I0430 00:01:41.072278 2740 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:01:41.072814 kubelet[2740]: I0430 00:01:41.072509 2740 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:01:41.073979 kubelet[2740]: I0430 00:01:41.073955 2740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:01:41.076105 kubelet[2740]: I0430 00:01:41.076073 2740 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:01:41.083217 kubelet[2740]: I0430 00:01:41.083189 2740 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:01:41.083761 kubelet[2740]: I0430 00:01:41.083700 2740 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:01:41.084014 kubelet[2740]: I0430 00:01:41.083743 2740 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:01:41.084113 kubelet[2740]: I0430 00:01:41.084023 2740 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:01:41.084113 kubelet[2740]: I0430 00:01:41.084031 2740 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:01:41.084113 kubelet[2740]: I0430 00:01:41.084069 2740 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:01:41.084225 kubelet[2740]: I0430 00:01:41.084210 2740 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:01:41.084225 kubelet[2740]: I0430 00:01:41.084226 2740 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:01:41.084277 kubelet[2740]: I0430 00:01:41.084252 2740 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:01:41.084277 kubelet[2740]: I0430 00:01:41.084270 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.085175 2740 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.085635 2740 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.086259 2740 server.go:1264] "Started kubelet" Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.086546 2740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.086827 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.087174 2740 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:01:41.088773 kubelet[2740]: I0430 00:01:41.087509 2740 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:01:41.097252 kubelet[2740]: I0430 00:01:41.089601 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:01:41.097252 kubelet[2740]: E0430 00:01:41.090999 2740 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:01:41.097252 kubelet[2740]: I0430 00:01:41.091034 2740 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:01:41.097252 kubelet[2740]: I0430 00:01:41.091131 2740 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:01:41.097252 kubelet[2740]: I0430 00:01:41.091277 2740 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:01:41.097252 kubelet[2740]: E0430 00:01:41.092852 2740 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:01:41.106490 kubelet[2740]: I0430 00:01:41.106435 2740 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:01:41.106490 kubelet[2740]: I0430 00:01:41.106468 2740 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:01:41.106623 kubelet[2740]: I0430 00:01:41.106591 2740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:01:41.130197 kubelet[2740]: I0430 00:01:41.129944 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:01:41.131742 kubelet[2740]: I0430 00:01:41.131670 2740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:01:41.131742 kubelet[2740]: I0430 00:01:41.131709 2740 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:01:41.132046 kubelet[2740]: I0430 00:01:41.132030 2740 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:01:41.132179 kubelet[2740]: E0430 00:01:41.132158 2740 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:01:41.160575 kubelet[2740]: I0430 00:01:41.160529 2740 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:01:41.160575 kubelet[2740]: I0430 00:01:41.160557 2740 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:01:41.160575 kubelet[2740]: I0430 00:01:41.160583 2740 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:01:41.162039 kubelet[2740]: I0430 00:01:41.160806 2740 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:01:41.162039 kubelet[2740]: I0430 00:01:41.160825 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:01:41.162039 kubelet[2740]: I0430 00:01:41.160852 2740 policy_none.go:49] "None policy: Start" Apr 30 00:01:41.163216 kubelet[2740]: I0430 00:01:41.162878 2740 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:01:41.163216 kubelet[2740]: I0430 00:01:41.162909 2740 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:01:41.163216 kubelet[2740]: I0430 00:01:41.163097 2740 state_mem.go:75] "Updated machine memory state" Apr 30 00:01:41.164366 kubelet[2740]: I0430 00:01:41.164338 2740 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:01:41.164573 kubelet[2740]: I0430 00:01:41.164529 2740 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:01:41.164661 kubelet[2740]: I0430 00:01:41.164639 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:01:41.195171 kubelet[2740]: I0430 00:01:41.195121 2740 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:01:41.202239 kubelet[2740]: I0430 00:01:41.202202 2740 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 30 00:01:41.202347 kubelet[2740]: I0430 00:01:41.202312 2740 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:01:41.233553 kubelet[2740]: I0430 00:01:41.233504 2740 topology_manager.go:215] "Topology Admit Handler" podUID="890ee4f10a25165c6610ae37164fae16" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 00:01:41.233947 kubelet[2740]: I0430 00:01:41.233818 2740 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 00:01:41.233947 kubelet[2740]: I0430 00:01:41.233885 2740 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 00:01:41.255383 kubelet[2740]: E0430 00:01:41.255330 2740 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392646 kubelet[2740]: I0430 00:01:41.392499 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:41.392646 kubelet[2740]: I0430 00:01:41.392550 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392646 kubelet[2740]: I0430 00:01:41.392573 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392646 kubelet[2740]: I0430 00:01:41.392594 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392646 kubelet[2740]: I0430 00:01:41.392617 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:41.392882 kubelet[2740]: I0430 00:01:41.392642 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392882 kubelet[2740]: I0430 00:01:41.392662 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:41.392882 kubelet[2740]: I0430 00:01:41.392703 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:01:41.392882 kubelet[2740]: I0430 00:01:41.392745 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/890ee4f10a25165c6610ae37164fae16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"890ee4f10a25165c6610ae37164fae16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:41.555453 kubelet[2740]: E0430 00:01:41.555386 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:41.556151 kubelet[2740]: E0430 00:01:41.556104 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:41.556403 kubelet[2740]: E0430 00:01:41.556377 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:42.085034 kubelet[2740]: I0430 00:01:42.084797 2740 apiserver.go:52] "Watching apiserver" Apr 30 00:01:42.091296 kubelet[2740]: I0430 00:01:42.091256 2740 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:01:42.147928 kubelet[2740]: E0430 00:01:42.147865 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:42.151377 kubelet[2740]: E0430 00:01:42.151331 2740 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:01:42.152884 kubelet[2740]: E0430 00:01:42.151746 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:42.154082 kubelet[2740]: E0430 00:01:42.153836 2740 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:01:42.155100 kubelet[2740]: E0430 00:01:42.155080 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:42.165980 kubelet[2740]: I0430 00:01:42.165845 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.165813075 podStartE2EDuration="1.165813075s" podCreationTimestamp="2025-04-30 00:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:01:42.164966792 +0000 UTC m=+1.133367081" watchObservedRunningTime="2025-04-30 00:01:42.165813075 +0000 UTC m=+1.134213404" Apr 30 00:01:42.180677 kubelet[2740]: I0430 00:01:42.180618 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.180600337 podStartE2EDuration="3.180600337s" podCreationTimestamp="2025-04-30 00:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:01:42.173342186 +0000 UTC m=+1.141742475" watchObservedRunningTime="2025-04-30 00:01:42.180600337 +0000 UTC m=+1.149000626" Apr 30 00:01:42.180845 kubelet[2740]: I0430 00:01:42.180699 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.180695217 podStartE2EDuration="1.180695217s" podCreationTimestamp="2025-04-30 00:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-04-30 00:01:42.180512336 +0000 UTC m=+1.148912625" watchObservedRunningTime="2025-04-30 00:01:42.180695217 +0000 UTC m=+1.149095506" Apr 30 00:01:42.452922 sudo[1723]: pam_unix(sudo:session): session closed for user root Apr 30 00:01:42.454201 sshd[1722]: Connection closed by 10.0.0.1 port 57626 Apr 30 00:01:42.454617 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Apr 30 00:01:42.458180 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:57626.service: Deactivated successfully. Apr 30 00:01:42.460202 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:01:42.460818 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:01:42.461684 systemd-logind[1550]: Removed session 5. Apr 30 00:01:43.147944 kubelet[2740]: E0430 00:01:43.147845 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:43.147944 kubelet[2740]: E0430 00:01:43.147884 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:44.148878 kubelet[2740]: E0430 00:01:44.148844 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:48.588902 kubelet[2740]: E0430 00:01:48.588871 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:49.156653 kubelet[2740]: E0430 00:01:49.156608 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:50.288515 kubelet[2740]: E0430 00:01:50.288483 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:51.160228 kubelet[2740]: E0430 00:01:51.159846 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:51.741038 kubelet[2740]: E0430 00:01:51.741003 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:52.165034 kubelet[2740]: E0430 00:01:52.164798 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:54.515272 update_engine[1553]: I20250430 00:01:54.514776 1553 update_attempter.cc:509] Updating boot flags... 
Apr 30 00:01:54.542784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2811) Apr 30 00:01:54.571020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2812) Apr 30 00:01:57.711257 kubelet[2740]: I0430 00:01:57.711200 2740 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:01:57.711732 containerd[1561]: time="2025-04-30T00:01:57.711681222Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:01:57.711976 kubelet[2740]: I0430 00:01:57.711934 2740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:01:58.618164 kubelet[2740]: I0430 00:01:58.618041 2740 topology_manager.go:215] "Topology Admit Handler" podUID="b031bf76-b7da-421a-b191-aa49dc07c112" podNamespace="kube-system" podName="kube-proxy-fhjh5" Apr 30 00:01:58.624649 kubelet[2740]: I0430 00:01:58.620426 2740 topology_manager.go:215] "Topology Admit Handler" podUID="b160d92f-390c-407a-92a5-f43960aae645" podNamespace="kube-flannel" podName="kube-flannel-ds-sd7x4" Apr 30 00:01:58.712311 kubelet[2740]: I0430 00:01:58.712258 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b031bf76-b7da-421a-b191-aa49dc07c112-lib-modules\") pod \"kube-proxy-fhjh5\" (UID: \"b031bf76-b7da-421a-b191-aa49dc07c112\") " pod="kube-system/kube-proxy-fhjh5" Apr 30 00:01:58.712311 kubelet[2740]: I0430 00:01:58.712310 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b160d92f-390c-407a-92a5-f43960aae645-run\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.712708 kubelet[2740]: I0430 00:01:58.712332 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpp6\" (UniqueName: \"kubernetes.io/projected/b160d92f-390c-407a-92a5-f43960aae645-kube-api-access-9zpp6\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.712708 kubelet[2740]: I0430 00:01:58.712398 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b031bf76-b7da-421a-b191-aa49dc07c112-kube-proxy\") pod \"kube-proxy-fhjh5\" (UID: \"b031bf76-b7da-421a-b191-aa49dc07c112\") " pod="kube-system/kube-proxy-fhjh5" Apr 30 00:01:58.712708 kubelet[2740]: I0430 00:01:58.712444 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b031bf76-b7da-421a-b191-aa49dc07c112-xtables-lock\") pod \"kube-proxy-fhjh5\" (UID: \"b031bf76-b7da-421a-b191-aa49dc07c112\") " pod="kube-system/kube-proxy-fhjh5" Apr 30 00:01:58.712708 kubelet[2740]: I0430 00:01:58.712469 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b160d92f-390c-407a-92a5-f43960aae645-cni\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.712708 kubelet[2740]: I0430 00:01:58.712491 2740 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b160d92f-390c-407a-92a5-f43960aae645-flannel-cfg\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.712875 kubelet[2740]: I0430 00:01:58.712510 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vl8\" (UniqueName: \"kubernetes.io/projected/b031bf76-b7da-421a-b191-aa49dc07c112-kube-api-access-64vl8\") pod \"kube-proxy-fhjh5\" (UID: \"b031bf76-b7da-421a-b191-aa49dc07c112\") " pod="kube-system/kube-proxy-fhjh5" Apr 30 00:01:58.712875 kubelet[2740]: I0430 00:01:58.712526 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b160d92f-390c-407a-92a5-f43960aae645-cni-plugin\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.712875 kubelet[2740]: I0430 00:01:58.712545 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b160d92f-390c-407a-92a5-f43960aae645-xtables-lock\") pod \"kube-flannel-ds-sd7x4\" (UID: \"b160d92f-390c-407a-92a5-f43960aae645\") " pod="kube-flannel/kube-flannel-ds-sd7x4" Apr 30 00:01:58.930982 kubelet[2740]: E0430 00:01:58.930575 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:58.930982 kubelet[2740]: E0430 00:01:58.930602 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:58.932058 containerd[1561]: time="2025-04-30T00:01:58.931706010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhjh5,Uid:b031bf76-b7da-421a-b191-aa49dc07c112,Namespace:kube-system,Attempt:0,}" Apr 30 00:01:58.932058 containerd[1561]: time="2025-04-30T00:01:58.931728970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sd7x4,Uid:b160d92f-390c-407a-92a5-f43960aae645,Namespace:kube-flannel,Attempt:0,}" Apr 30 00:01:58.962423 containerd[1561]: time="2025-04-30T00:01:58.962321015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:01:58.962423 containerd[1561]: time="2025-04-30T00:01:58.962321055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:01:58.962423 containerd[1561]: time="2025-04-30T00:01:58.962382696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:01:58.962423 containerd[1561]: time="2025-04-30T00:01:58.962406456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:58.962679 containerd[1561]: time="2025-04-30T00:01:58.962545576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:58.962805 containerd[1561]: time="2025-04-30T00:01:58.962677656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:01:58.962805 containerd[1561]: time="2025-04-30T00:01:58.962703576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:58.967626 containerd[1561]: time="2025-04-30T00:01:58.967545623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:01:59.013276 containerd[1561]: time="2025-04-30T00:01:59.013159529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sd7x4,Uid:b160d92f-390c-407a-92a5-f43960aae645,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\"" Apr 30 00:01:59.016807 kubelet[2740]: E0430 00:01:59.015579 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:59.016979 containerd[1561]: time="2025-04-30T00:01:59.016933775Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Apr 30 00:01:59.025664 containerd[1561]: time="2025-04-30T00:01:59.025581947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhjh5,Uid:b031bf76-b7da-421a-b191-aa49dc07c112,Namespace:kube-system,Attempt:0,} returns sandbox id \"155f162b650f22c20c548d7af15ed6a71b3821923ea7ebb13b2e16f4625bc1f5\"" Apr 30 00:01:59.026440 kubelet[2740]: E0430 00:01:59.026339 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:01:59.030558 containerd[1561]: time="2025-04-30T00:01:59.028937351Z" level=info msg="CreateContainer within sandbox \"155f162b650f22c20c548d7af15ed6a71b3821923ea7ebb13b2e16f4625bc1f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:01:59.066626 containerd[1561]: time="2025-04-30T00:01:59.066565403Z" level=info msg="CreateContainer within sandbox \"155f162b650f22c20c548d7af15ed6a71b3821923ea7ebb13b2e16f4625bc1f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8186877aa9ce2f1cf35c94d1872d97ddea791090b303da6933362c5ab64d281\"" Apr 30 00:01:59.067204 containerd[1561]: time="2025-04-30T00:01:59.067179604Z" level=info msg="StartContainer for \"a8186877aa9ce2f1cf35c94d1872d97ddea791090b303da6933362c5ab64d281\"" Apr 30 00:01:59.130097 containerd[1561]: time="2025-04-30T00:01:59.129954011Z" level=info msg="StartContainer for \"a8186877aa9ce2f1cf35c94d1872d97ddea791090b303da6933362c5ab64d281\" returns successfully" Apr 30 00:01:59.179361 kubelet[2740]: E0430 00:01:59.179299 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:00.150253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904864482.mount: Deactivated successfully. 
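Above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the runtime over CRI and the kube-proxy and flannel sandboxes come up against it. The same value is recorded on the Node object and can be read back; a small client-go sketch, again assuming an admin kubeconfig path (an assumption) and the node name "localhost" from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Expected to print 192.168.0.0/24, matching the runtime config update in the log.
	fmt.Println("PodCIDR:", node.Spec.PodCIDR)
}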
Apr 30 00:02:00.176951 containerd[1561]: time="2025-04-30T00:02:00.176902644Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:00.177866 containerd[1561]: time="2025-04-30T00:02:00.177721405Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Apr 30 00:02:00.178557 containerd[1561]: time="2025-04-30T00:02:00.178522446Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:00.182006 containerd[1561]: time="2025-04-30T00:02:00.181852850Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:00.182853 containerd[1561]: time="2025-04-30T00:02:00.182815291Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.165811676s" Apr 30 00:02:00.183085 containerd[1561]: time="2025-04-30T00:02:00.183065051Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Apr 30 00:02:00.185230 containerd[1561]: time="2025-04-30T00:02:00.185197454Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 30 00:02:00.199589 containerd[1561]: time="2025-04-30T00:02:00.199417033Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"769f9a5a8c4f38adbd4b6fc50526a6c1ceafbeed79e7a77733a3feea01f51d4c\"" Apr 30 00:02:00.200075 containerd[1561]: time="2025-04-30T00:02:00.199982393Z" level=info msg="StartContainer for \"769f9a5a8c4f38adbd4b6fc50526a6c1ceafbeed79e7a77733a3feea01f51d4c\"" Apr 30 00:02:00.250018 containerd[1561]: time="2025-04-30T00:02:00.249975698Z" level=info msg="StartContainer for \"769f9a5a8c4f38adbd4b6fc50526a6c1ceafbeed79e7a77733a3feea01f51d4c\" returns successfully" Apr 30 00:02:00.294205 containerd[1561]: time="2025-04-30T00:02:00.289543910Z" level=info msg="shim disconnected" id=769f9a5a8c4f38adbd4b6fc50526a6c1ceafbeed79e7a77733a3feea01f51d4c namespace=k8s.io Apr 30 00:02:00.294205 containerd[1561]: time="2025-04-30T00:02:00.294127115Z" level=warning msg="cleaning up after shim disconnected" id=769f9a5a8c4f38adbd4b6fc50526a6c1ceafbeed79e7a77733a3feea01f51d4c namespace=k8s.io Apr 30 00:02:00.294205 containerd[1561]: time="2025-04-30T00:02:00.294142436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:02:01.155476 kubelet[2740]: I0430 00:02:01.155415 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhjh5" podStartSLOduration=3.1553981 podStartE2EDuration="3.1553981s" podCreationTimestamp="2025-04-30 00:01:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:01:59.193019018 +0000 UTC m=+18.161419267" watchObservedRunningTime="2025-04-30 00:02:01.1553981 +0000 UTC m=+20.123798389" Apr 30 00:02:01.186691 kubelet[2740]: E0430 00:02:01.186637 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:01.188302 containerd[1561]: time="2025-04-30T00:02:01.188253659Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Apr 30 00:02:02.430083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570073997.mount: Deactivated successfully. Apr 30 00:02:02.948016 containerd[1561]: time="2025-04-30T00:02:02.947968847Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:02.949219 containerd[1561]: time="2025-04-30T00:02:02.949163448Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" Apr 30 00:02:02.950802 containerd[1561]: time="2025-04-30T00:02:02.950303849Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:02.953435 containerd[1561]: time="2025-04-30T00:02:02.952858292Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:02:02.954213 containerd[1561]: time="2025-04-30T00:02:02.954187174Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.765357754s" Apr 30 00:02:02.954273 containerd[1561]: time="2025-04-30T00:02:02.954217774Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Apr 30 00:02:02.962830 containerd[1561]: time="2025-04-30T00:02:02.962787623Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:02:02.972533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461935126.mount: Deactivated successfully. 
Apr 30 00:02:02.972747 containerd[1561]: time="2025-04-30T00:02:02.972515954Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e\"" Apr 30 00:02:02.976565 containerd[1561]: time="2025-04-30T00:02:02.976287039Z" level=info msg="StartContainer for \"5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e\"" Apr 30 00:02:03.028777 containerd[1561]: time="2025-04-30T00:02:03.028635497Z" level=info msg="StartContainer for \"5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e\" returns successfully" Apr 30 00:02:03.065534 kubelet[2740]: I0430 00:02:03.064731 2740 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:02:03.130848 containerd[1561]: time="2025-04-30T00:02:03.130782526Z" level=info msg="shim disconnected" id=5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e namespace=k8s.io Apr 30 00:02:03.130848 containerd[1561]: time="2025-04-30T00:02:03.130840246Z" level=warning msg="cleaning up after shim disconnected" id=5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e namespace=k8s.io Apr 30 00:02:03.130848 containerd[1561]: time="2025-04-30T00:02:03.130848606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:02:03.146567 kubelet[2740]: I0430 00:02:03.146518 2740 topology_manager.go:215] "Topology Admit Handler" podUID="f43e6070-7af6-468d-9495-e7b0f29cf3db" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2hkjf" Apr 30 00:02:03.147143 kubelet[2740]: I0430 00:02:03.147119 2740 topology_manager.go:215] "Topology Admit Handler" podUID="443df136-6040-48fe-b96d-ede6d423f94f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dzh29" Apr 30 00:02:03.191656 kubelet[2740]: E0430 00:02:03.191606 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:03.194852 containerd[1561]: time="2025-04-30T00:02:03.194764474Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 30 00:02:03.205851 containerd[1561]: time="2025-04-30T00:02:03.205714286Z" level=info msg="CreateContainer within sandbox \"8784b9f2848c1c71b379ba8a0653d4cd289ca7119bd19681d9174caf7d5e2af3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"635ca80eb71b15403a65cb103327e3e57eb176c2cf149d51b58f88846a02bfca\"" Apr 30 00:02:03.208412 containerd[1561]: time="2025-04-30T00:02:03.208356928Z" level=info msg="StartContainer for \"635ca80eb71b15403a65cb103327e3e57eb176c2cf149d51b58f88846a02bfca\"" Apr 30 00:02:03.242116 kubelet[2740]: I0430 00:02:03.242072 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f43e6070-7af6-468d-9495-e7b0f29cf3db-config-volume\") pod \"coredns-7db6d8ff4d-2hkjf\" (UID: \"f43e6070-7af6-468d-9495-e7b0f29cf3db\") " pod="kube-system/coredns-7db6d8ff4d-2hkjf" Apr 30 00:02:03.242116 kubelet[2740]: I0430 00:02:03.242117 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/443df136-6040-48fe-b96d-ede6d423f94f-config-volume\") pod \"coredns-7db6d8ff4d-dzh29\" (UID: \"443df136-6040-48fe-b96d-ede6d423f94f\") " pod="kube-system/coredns-7db6d8ff4d-dzh29" Apr 30 00:02:03.242368 kubelet[2740]: I0430 00:02:03.242137 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5fv\" (UniqueName: \"kubernetes.io/projected/443df136-6040-48fe-b96d-ede6d423f94f-kube-api-access-6k5fv\") pod \"coredns-7db6d8ff4d-dzh29\" (UID: \"443df136-6040-48fe-b96d-ede6d423f94f\") " pod="kube-system/coredns-7db6d8ff4d-dzh29" Apr 30 00:02:03.242368 kubelet[2740]: I0430 00:02:03.242158 2740 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xzk\" (UniqueName: \"kubernetes.io/projected/f43e6070-7af6-468d-9495-e7b0f29cf3db-kube-api-access-99xzk\") pod \"coredns-7db6d8ff4d-2hkjf\" (UID: \"f43e6070-7af6-468d-9495-e7b0f29cf3db\") " pod="kube-system/coredns-7db6d8ff4d-2hkjf" Apr 30 00:02:03.262288 containerd[1561]: time="2025-04-30T00:02:03.262239746Z" level=info msg="StartContainer for \"635ca80eb71b15403a65cb103327e3e57eb176c2cf149d51b58f88846a02bfca\" returns successfully" Apr 30 00:02:03.366491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5166b2319cfa6bfa95cecdbc361895154dac7497b45fc36d3c7e58b74a85b50e-rootfs.mount: Deactivated successfully. Apr 30 00:02:03.453000 kubelet[2740]: E0430 00:02:03.452964 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:03.453350 kubelet[2740]: E0430 00:02:03.453087 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:03.453621 containerd[1561]: time="2025-04-30T00:02:03.453585630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hkjf,Uid:f43e6070-7af6-468d-9495-e7b0f29cf3db,Namespace:kube-system,Attempt:0,}" Apr 30 00:02:03.454526 containerd[1561]: time="2025-04-30T00:02:03.454330711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzh29,Uid:443df136-6040-48fe-b96d-ede6d423f94f,Namespace:kube-system,Attempt:0,}" Apr 30 00:02:03.585297 containerd[1561]: time="2025-04-30T00:02:03.585063291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hkjf,Uid:f43e6070-7af6-468d-9495-e7b0f29cf3db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 00:02:03.585992 kubelet[2740]: E0430 00:02:03.585534 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 00:02:03.585992 kubelet[2740]: E0430 00:02:03.585609 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-2hkjf" Apr 30 00:02:03.585992 kubelet[2740]: E0430 00:02:03.585630 2740 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-2hkjf" Apr 30 00:02:03.585992 kubelet[2740]: E0430 00:02:03.585687 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hkjf_kube-system(f43e6070-7af6-468d-9495-e7b0f29cf3db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hkjf_kube-system(f43e6070-7af6-468d-9495-e7b0f29cf3db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-2hkjf" podUID="f43e6070-7af6-468d-9495-e7b0f29cf3db" Apr 30 00:02:03.586639 containerd[1561]: time="2025-04-30T00:02:03.586579172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzh29,Uid:443df136-6040-48fe-b96d-ede6d423f94f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 00:02:03.586809 kubelet[2740]: E0430 00:02:03.586782 2740 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 00:02:03.586863 kubelet[2740]: E0430 00:02:03.586822 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-dzh29" Apr 30 00:02:03.586863 kubelet[2740]: E0430 00:02:03.586840 2740 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-dzh29" Apr 30 00:02:03.586922 kubelet[2740]: E0430 00:02:03.586876 2740 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dzh29_kube-system(443df136-6040-48fe-b96d-ede6d423f94f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dzh29_kube-system(443df136-6040-48fe-b96d-ede6d423f94f)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-dzh29" podUID="443df136-6040-48fe-b96d-ede6d423f94f" Apr 30 00:02:04.194532 kubelet[2740]: E0430 00:02:04.194228 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:04.206271 kubelet[2740]: I0430 00:02:04.206027 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-sd7x4" podStartSLOduration=2.265912419 podStartE2EDuration="6.206009301s" podCreationTimestamp="2025-04-30 00:01:58 +0000 UTC" firstStartedPulling="2025-04-30 00:01:59.016413214 +0000 UTC m=+17.984813503" lastFinishedPulling="2025-04-30 00:02:02.956510136 +0000 UTC m=+21.924910385" observedRunningTime="2025-04-30 00:02:04.203808938 +0000 UTC m=+23.172209227" watchObservedRunningTime="2025-04-30 00:02:04.206009301 +0000 UTC m=+23.174409590" Apr 30 00:02:04.364148 systemd[1]: run-netns-cni\x2d90e7a813\x2de5d0\x2d8cef\x2d7284\x2d2c42bf9c4ae4.mount: Deactivated successfully. Apr 30 00:02:04.364488 systemd[1]: run-netns-cni\x2dbf53e88c\x2df1cf\x2d59ce\x2dd196\x2d98ed613169ef.mount: Deactivated successfully. Apr 30 00:02:04.364960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-564a62dfb692014d8bdf076b1a33e1761c29261f8c8fae879a34aaeb427edb9b-shm.mount: Deactivated successfully. Apr 30 00:02:04.365192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77ada44878ab14b2ed4f3f33100cbb8fc1d9523478abaf02a19066e07c04639d-shm.mount: Deactivated successfully. Apr 30 00:02:04.367878 systemd-networkd[1233]: flannel.1: Link UP Apr 30 00:02:04.367884 systemd-networkd[1233]: flannel.1: Gained carrier Apr 30 00:02:05.196109 kubelet[2740]: E0430 00:02:05.196073 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:05.982725 systemd-networkd[1233]: flannel.1: Gained IPv6LL Apr 30 00:02:06.265052 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:35636.service - OpenSSH per-connection server daemon (10.0.0.1:35636). Apr 30 00:02:06.306228 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 35636 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:06.307983 sshd-session[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:06.312731 systemd-logind[1550]: New session 6 of user core. Apr 30 00:02:06.326123 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:02:06.447983 sshd[3393]: Connection closed by 10.0.0.1 port 35636 Apr 30 00:02:06.448591 sshd-session[3390]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:06.452387 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:35636.service: Deactivated successfully. Apr 30 00:02:06.455067 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:02:06.455145 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:02:06.456316 systemd-logind[1550]: Removed session 6. Apr 30 00:02:11.460066 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:35642.service - OpenSSH per-connection server daemon (10.0.0.1:35642). 
Apr 30 00:02:11.496789 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:11.498162 sshd-session[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:11.503600 systemd-logind[1550]: New session 7 of user core. Apr 30 00:02:11.518176 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:02:11.646596 sshd[3431]: Connection closed by 10.0.0.1 port 35642 Apr 30 00:02:11.646546 sshd-session[3428]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:11.650553 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:35642.service: Deactivated successfully. Apr 30 00:02:11.653018 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:02:11.653810 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:02:11.654719 systemd-logind[1550]: Removed session 7. Apr 30 00:02:16.133679 kubelet[2740]: E0430 00:02:16.133643 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:16.134854 containerd[1561]: time="2025-04-30T00:02:16.134450315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzh29,Uid:443df136-6040-48fe-b96d-ede6d423f94f,Namespace:kube-system,Attempt:0,}" Apr 30 00:02:16.174003 systemd-networkd[1233]: cni0: Link UP Apr 30 00:02:16.174010 systemd-networkd[1233]: cni0: Gained carrier Apr 30 00:02:16.183363 systemd-networkd[1233]: cni0: Lost carrier Apr 30 00:02:16.183585 systemd-networkd[1233]: vethc7a35742: Link UP Apr 30 00:02:16.186938 kernel: cni0: port 1(vethc7a35742) entered blocking state Apr 30 00:02:16.187209 kernel: cni0: port 1(vethc7a35742) entered disabled state Apr 30 00:02:16.187248 kernel: vethc7a35742: entered allmulticast mode Apr 30 00:02:16.187268 kernel: vethc7a35742: entered promiscuous mode Apr 30 00:02:16.204144 kernel: cni0: port 1(vethc7a35742) entered blocking state Apr 30 00:02:16.204319 kernel: cni0: port 1(vethc7a35742) entered forwarding state Apr 30 00:02:16.204532 systemd-networkd[1233]: vethc7a35742: Gained carrier Apr 30 00:02:16.205246 systemd-networkd[1233]: cni0: Gained carrier Apr 30 00:02:16.206987 containerd[1561]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001e938), "name":"cbr0", "type":"bridge"} Apr 30 00:02:16.206987 containerd[1561]: delegateAdd: netconf sent to delegate plugin: Apr 30 00:02:16.230271 containerd[1561]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-04-30T00:02:16.229611479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:02:16.230271 containerd[1561]: time="2025-04-30T00:02:16.229683959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:02:16.230271 containerd[1561]: time="2025-04-30T00:02:16.229699039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:02:16.230271 containerd[1561]: time="2025-04-30T00:02:16.229832319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:02:16.252957 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:02:16.271596 containerd[1561]: time="2025-04-30T00:02:16.271557538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzh29,Uid:443df136-6040-48fe-b96d-ede6d423f94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"32dd912245661301dcccb1c6625e581524e030413bb767e539b4bf60c0beb0f8\"" Apr 30 00:02:16.272551 kubelet[2740]: E0430 00:02:16.272527 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:16.274525 containerd[1561]: time="2025-04-30T00:02:16.274491900Z" level=info msg="CreateContainer within sandbox \"32dd912245661301dcccb1c6625e581524e030413bb767e539b4bf60c0beb0f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:02:16.291496 containerd[1561]: time="2025-04-30T00:02:16.291390668Z" level=info msg="CreateContainer within sandbox \"32dd912245661301dcccb1c6625e581524e030413bb767e539b4bf60c0beb0f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85f303da8cc16371dcce49e612fec55cd7c64504f1df62d033c54229a5d8d836\"" Apr 30 00:02:16.292165 containerd[1561]: time="2025-04-30T00:02:16.292140668Z" level=info msg="StartContainer for \"85f303da8cc16371dcce49e612fec55cd7c64504f1df62d033c54229a5d8d836\"" Apr 30 00:02:16.339863 containerd[1561]: time="2025-04-30T00:02:16.339813010Z" level=info msg="StartContainer for \"85f303da8cc16371dcce49e612fec55cd7c64504f1df62d033c54229a5d8d836\" returns successfully" Apr 30 00:02:16.667047 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:33744.service - OpenSSH per-connection server daemon (10.0.0.1:33744). Apr 30 00:02:16.705023 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 33744 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:16.706502 sshd-session[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:16.713474 systemd-logind[1550]: New session 8 of user core. Apr 30 00:02:16.720041 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:02:16.851090 sshd[3586]: Connection closed by 10.0.0.1 port 33744 Apr 30 00:02:16.853238 sshd-session[3583]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:16.862040 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Apr 30 00:02:16.862464 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:33744.service: Deactivated successfully. Apr 30 00:02:16.865654 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:02:16.868744 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:02:16.870014 systemd-logind[1550]: Removed session 8. 
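At 00:02:16 the first coredns sandbox comes up cleanly: flannel's CNI meta-plugin now finds subnet.env, brings up cni0 and vethc7a35742, and hands the delegate configuration shown in the delegateAdd lines above to the bridge/host-local plugins, which allocate a pod address from 192.168.0.0/24 and install a route to the 192.168.0.0/17 cluster range (the net.IPMask{0xff, 0xff, 0x80, 0x0} in the Go dump is a /17 mask). The short standalone Go program below is not part of anything running on this host; it is only a sketch that decodes the exact netconf JSON from the log so the fields are easier to read:

    // netconf_dump.go: illustrative decoder for the bridge netconf logged by containerd above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ipamRange mirrors one entry of the host-local "ranges" list.
    type ipamRange struct {
        Subnet string `json:"subnet"`
    }

    // netConf mirrors the fields present in the logged delegate configuration.
    type netConf struct {
        CNIVersion       string `json:"cniVersion"`
        Name             string `json:"name"`
        Type             string `json:"type"`
        MTU              int    `json:"mtu"`
        IsGateway        bool   `json:"isGateway"`
        IsDefaultGateway bool   `json:"isDefaultGateway"`
        HairpinMode      bool   `json:"hairpinMode"`
        IPMasq           bool   `json:"ipMasq"`
        IPAM             struct {
            Type   string        `json:"type"`
            Ranges [][]ipamRange `json:"ranges"`
            Routes []struct {
                Dst string `json:"dst"`
            } `json:"routes"`
        } `json:"ipam"`
    }

    func main() {
        // Verbatim copy of the netconf that flannel handed to the bridge plugin.
        raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

        var c netConf
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("bridge %q: pod subnet %s, cluster route %s, mtu %d, ipMasq %v\n",
            c.Name, c.IPAM.Ranges[0][0].Subnet, c.IPAM.Routes[0].Dst, c.MTU, c.IPMasq)
    }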
Apr 30 00:02:16.904280 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:16.905643 sshd-session[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:16.909926 systemd-logind[1550]: New session 9 of user core. Apr 30 00:02:16.929424 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:02:17.084957 sshd[3603]: Connection closed by 10.0.0.1 port 33752 Apr 30 00:02:17.086891 sshd-session[3597]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:17.096116 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:33760.service - OpenSSH per-connection server daemon (10.0.0.1:33760). Apr 30 00:02:17.096611 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:33752.service: Deactivated successfully. Apr 30 00:02:17.103933 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:02:17.104094 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:02:17.110935 systemd-logind[1550]: Removed session 9. Apr 30 00:02:17.133663 kubelet[2740]: E0430 00:02:17.133295 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:17.134076 containerd[1561]: time="2025-04-30T00:02:17.133945933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hkjf,Uid:f43e6070-7af6-468d-9495-e7b0f29cf3db,Namespace:kube-system,Attempt:0,}" Apr 30 00:02:17.148100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030009809.mount: Deactivated successfully. Apr 30 00:02:17.153642 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 33760 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:17.154560 sshd-session[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:17.160350 systemd-logind[1550]: New session 10 of user core. Apr 30 00:02:17.160986 systemd-networkd[1233]: vetha86558c1: Link UP Apr 30 00:02:17.163012 kernel: cni0: port 2(vetha86558c1) entered blocking state Apr 30 00:02:17.163838 kernel: cni0: port 2(vetha86558c1) entered disabled state Apr 30 00:02:17.163883 kernel: vetha86558c1: entered allmulticast mode Apr 30 00:02:17.165807 kernel: vetha86558c1: entered promiscuous mode Apr 30 00:02:17.165884 kernel: cni0: port 2(vetha86558c1) entered blocking state Apr 30 00:02:17.165901 kernel: cni0: port 2(vetha86558c1) entered forwarding state Apr 30 00:02:17.168114 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 30 00:02:17.172235 systemd-networkd[1233]: vetha86558c1: Gained carrier Apr 30 00:02:17.176919 containerd[1561]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Apr 30 00:02:17.176919 containerd[1561]: delegateAdd: netconf sent to delegate plugin: Apr 30 00:02:17.194690 containerd[1561]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-04-30T00:02:17.194383599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:02:17.194690 containerd[1561]: time="2025-04-30T00:02:17.194453039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:02:17.194690 containerd[1561]: time="2025-04-30T00:02:17.194485159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:02:17.194690 containerd[1561]: time="2025-04-30T00:02:17.194599199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:02:17.216953 kubelet[2740]: E0430 00:02:17.216906 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:17.223921 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:02:17.240353 kubelet[2740]: I0430 00:02:17.240118 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dzh29" podStartSLOduration=19.240101259 podStartE2EDuration="19.240101259s" podCreationTimestamp="2025-04-30 00:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:02:17.237570258 +0000 UTC m=+36.205970547" watchObservedRunningTime="2025-04-30 00:02:17.240101259 +0000 UTC m=+36.208501508" Apr 30 00:02:17.269036 containerd[1561]: time="2025-04-30T00:02:17.268811751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hkjf,Uid:f43e6070-7af6-468d-9495-e7b0f29cf3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"9475cea6fdaaa2bb1f0e782bf415724c63d314421e153d7c7d4297a4e1a8da83\"" Apr 30 00:02:17.269987 kubelet[2740]: E0430 00:02:17.269960 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:17.272806 containerd[1561]: time="2025-04-30T00:02:17.272705513Z" level=info msg="CreateContainer within sandbox \"9475cea6fdaaa2bb1f0e782bf415724c63d314421e153d7c7d4297a4e1a8da83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 
00:02:17.296932 containerd[1561]: time="2025-04-30T00:02:17.296882843Z" level=info msg="CreateContainer within sandbox \"9475cea6fdaaa2bb1f0e782bf415724c63d314421e153d7c7d4297a4e1a8da83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f25adaa5cbe945b7fe9855548a7af0a18943d35ef25f0dd7fd31eae09832eee\"" Apr 30 00:02:17.298112 containerd[1561]: time="2025-04-30T00:02:17.297853644Z" level=info msg="StartContainer for \"0f25adaa5cbe945b7fe9855548a7af0a18943d35ef25f0dd7fd31eae09832eee\"" Apr 30 00:02:17.310187 systemd-networkd[1233]: cni0: Gained IPv6LL Apr 30 00:02:17.344839 containerd[1561]: time="2025-04-30T00:02:17.344697104Z" level=info msg="StartContainer for \"0f25adaa5cbe945b7fe9855548a7af0a18943d35ef25f0dd7fd31eae09832eee\" returns successfully" Apr 30 00:02:17.370154 sshd[3642]: Connection closed by 10.0.0.1 port 33760 Apr 30 00:02:17.368245 sshd-session[3611]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:17.371721 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:02:17.373347 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:33760.service: Deactivated successfully. Apr 30 00:02:17.375405 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:02:17.376800 systemd-logind[1550]: Removed session 10. Apr 30 00:02:17.949991 systemd-networkd[1233]: vethc7a35742: Gained IPv6LL Apr 30 00:02:18.149208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455410661.mount: Deactivated successfully. Apr 30 00:02:18.221455 kubelet[2740]: E0430 00:02:18.220999 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:18.223117 kubelet[2740]: E0430 00:02:18.222983 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:18.249623 kubelet[2740]: I0430 00:02:18.249290 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2hkjf" podStartSLOduration=20.249273409 podStartE2EDuration="20.249273409s" podCreationTimestamp="2025-04-30 00:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:02:18.234689683 +0000 UTC m=+37.203089972" watchObservedRunningTime="2025-04-30 00:02:18.249273409 +0000 UTC m=+37.217673698" Apr 30 00:02:18.589901 systemd-networkd[1233]: vetha86558c1: Gained IPv6LL Apr 30 00:02:19.222455 kubelet[2740]: E0430 00:02:19.222429 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:19.222986 kubelet[2740]: E0430 00:02:19.222467 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:02:22.376013 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). 
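The recurring dns.go:153 warnings from kubelet are a limit being enforced, not a failure: kubelet copies at most three nameserver entries from the node's resolv.conf into the resolv.conf it builds for pods, so a source file listing more than three produces "Nameserver limits exceeded" and only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied. The omitted entries are not recorded in this log; the sketch below only illustrates the shape of a resolv.conf that would trigger the warning, using a hypothetical fourth server from the documentation address range:

    # resolv.conf seen by kubelet (illustrative; 192.0.2.53 is hypothetical)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53   # anything beyond the third entry is dropped for pods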
Apr 30 00:02:22.411639 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:22.412887 sshd-session[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:22.416621 systemd-logind[1550]: New session 11 of user core. Apr 30 00:02:22.429132 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:02:22.544958 sshd[3771]: Connection closed by 10.0.0.1 port 33774 Apr 30 00:02:22.545541 sshd-session[3768]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:22.548924 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:33774.service: Deactivated successfully. Apr 30 00:02:22.551104 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:02:22.551707 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:02:22.552913 systemd-logind[1550]: Removed session 11. Apr 30 00:02:27.563276 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:60310.service - OpenSSH per-connection server daemon (10.0.0.1:60310). Apr 30 00:02:27.609922 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 60310 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:27.612049 sshd-session[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:27.616423 systemd-logind[1550]: New session 12 of user core. Apr 30 00:02:27.631122 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:02:27.746578 sshd[3807]: Connection closed by 10.0.0.1 port 60310 Apr 30 00:02:27.747080 sshd-session[3804]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:27.750308 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:60310.service: Deactivated successfully. Apr 30 00:02:27.754474 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:02:27.755120 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:02:27.756524 systemd-logind[1550]: Removed session 12. Apr 30 00:02:32.764466 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:53212.service - OpenSSH per-connection server daemon (10.0.0.1:53212). Apr 30 00:02:32.796968 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:32.798283 sshd-session[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:32.805336 systemd-logind[1550]: New session 13 of user core. Apr 30 00:02:32.814465 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:02:32.946439 sshd[3845]: Connection closed by 10.0.0.1 port 53212 Apr 30 00:02:32.948680 sshd-session[3842]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:32.956098 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:53214.service - OpenSSH per-connection server daemon (10.0.0.1:53214). Apr 30 00:02:32.956503 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:53212.service: Deactivated successfully. Apr 30 00:02:32.959965 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:02:32.960078 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:02:32.962415 systemd-logind[1550]: Removed session 13. 
Apr 30 00:02:32.993769 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 53214 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:32.995152 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:33.000628 systemd-logind[1550]: New session 14 of user core. Apr 30 00:02:33.009091 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:02:33.226557 sshd[3861]: Connection closed by 10.0.0.1 port 53214 Apr 30 00:02:33.226428 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:33.238240 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:53220.service - OpenSSH per-connection server daemon (10.0.0.1:53220). Apr 30 00:02:33.238824 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:53214.service: Deactivated successfully. Apr 30 00:02:33.242645 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:02:33.243078 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:02:33.245917 systemd-logind[1550]: Removed session 14. Apr 30 00:02:33.278256 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 53220 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:33.280242 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:33.286542 systemd-logind[1550]: New session 15 of user core. Apr 30 00:02:33.298296 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:02:34.703480 sshd[3874]: Connection closed by 10.0.0.1 port 53220 Apr 30 00:02:34.706991 sshd-session[3868]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:34.716218 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:53232.service - OpenSSH per-connection server daemon (10.0.0.1:53232). Apr 30 00:02:34.718691 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:53220.service: Deactivated successfully. Apr 30 00:02:34.727771 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:02:34.732909 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:02:34.738257 systemd-logind[1550]: Removed session 15. Apr 30 00:02:34.772221 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 53232 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:34.773726 sshd-session[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:34.778372 systemd-logind[1550]: New session 16 of user core. Apr 30 00:02:34.790170 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:02:35.031696 sshd[3918]: Connection closed by 10.0.0.1 port 53232 Apr 30 00:02:35.036548 sshd-session[3911]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:35.040105 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:53244.service - OpenSSH per-connection server daemon (10.0.0.1:53244). Apr 30 00:02:35.042138 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:53232.service: Deactivated successfully. Apr 30 00:02:35.044742 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:02:35.047583 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:02:35.051257 systemd-logind[1550]: Removed session 16. 
Apr 30 00:02:35.086922 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 53244 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:35.088435 sshd-session[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:35.093009 systemd-logind[1550]: New session 17 of user core. Apr 30 00:02:35.103094 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:02:35.225173 sshd[3931]: Connection closed by 10.0.0.1 port 53244 Apr 30 00:02:35.225798 sshd-session[3925]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:35.233063 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:53244.service: Deactivated successfully. Apr 30 00:02:35.235611 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:02:35.235821 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:02:35.236895 systemd-logind[1550]: Removed session 17. Apr 30 00:02:40.237102 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:53254.service - OpenSSH per-connection server daemon (10.0.0.1:53254). Apr 30 00:02:40.294292 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 53254 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:40.295366 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:40.306354 systemd-logind[1550]: New session 18 of user core. Apr 30 00:02:40.313289 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:02:40.444682 sshd[3972]: Connection closed by 10.0.0.1 port 53254 Apr 30 00:02:40.444658 sshd-session[3969]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:40.448726 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:53254.service: Deactivated successfully. Apr 30 00:02:40.452332 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:02:40.453605 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:02:40.454942 systemd-logind[1550]: Removed session 18. Apr 30 00:02:45.455071 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:59978.service - OpenSSH per-connection server daemon (10.0.0.1:59978). Apr 30 00:02:45.494057 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:45.495453 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:45.499584 systemd-logind[1550]: New session 19 of user core. Apr 30 00:02:45.513103 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:02:45.621455 sshd[4010]: Connection closed by 10.0.0.1 port 59978 Apr 30 00:02:45.621844 sshd-session[4007]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:45.624456 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:59978.service: Deactivated successfully. Apr 30 00:02:45.627240 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:02:45.627833 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:02:45.628944 systemd-logind[1550]: Removed session 19. Apr 30 00:02:50.637007 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:59994.service - OpenSSH per-connection server daemon (10.0.0.1:59994). 
Apr 30 00:02:50.704846 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 59994 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:02:50.706077 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:02:50.710258 systemd-logind[1550]: New session 20 of user core. Apr 30 00:02:50.719090 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:02:50.840857 sshd[4046]: Connection closed by 10.0.0.1 port 59994 Apr 30 00:02:50.841961 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Apr 30 00:02:50.845126 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:59994.service: Deactivated successfully. Apr 30 00:02:50.848614 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:02:50.850004 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:02:50.851139 systemd-logind[1550]: Removed session 20.