Feb 13 15:35:54.987552 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:35:54.987574 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:35:54.987584 kernel: KASLR enabled
Feb 13 15:35:54.987590 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:35:54.987596 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:35:54.987602 kernel: random: crng init done
Feb 13 15:35:54.987609 kernel: secureboot: Secure boot disabled
Feb 13 15:35:54.987615 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:35:54.987627 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:35:54.987635 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:35:54.987641 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987654 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987660 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987666 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987673 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987681 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987687 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987693 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987699 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:35:54.987706 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:35:54.987712 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:35:54.987718 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:54.987724 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:35:54.987730 kernel: Zone ranges:
Feb 13 15:35:54.987736 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:54.987744 kernel: DMA32 empty
Feb 13 15:35:54.987754 kernel: Normal empty
Feb 13 15:35:54.987760 kernel: Movable zone start for each node
Feb 13 15:35:54.987767 kernel: Early memory node ranges
Feb 13 15:35:54.987773 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:35:54.987779 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:35:54.987785 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:35:54.987792 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:35:54.987798 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:35:54.987804 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:35:54.987810 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:35:54.987817 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:54.987825 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:35:54.987831 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:35:54.987837 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:35:54.987846 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:35:54.987853 kernel: psci: Trusted OS migration not required
Feb 13 15:35:54.987860 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:35:54.987868 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:35:54.987874 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:35:54.987881 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:35:54.987888 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:35:54.987894 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:35:54.987901 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:35:54.987907 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:35:54.987914 kernel: CPU features: detected: Spectre-v4
Feb 13 15:35:54.987920 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:35:54.987927 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:35:54.987935 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:35:54.987942 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:35:54.987949 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:35:54.987955 kernel: alternatives: applying boot alternatives
Feb 13 15:35:54.987963 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:35:54.987970 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:35:54.987976 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:35:54.987983 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:35:54.987989 kernel: Fallback order for Node 0: 0
Feb 13 15:35:54.987996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:35:54.988003 kernel: Policy zone: DMA
Feb 13 15:35:54.988011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:35:54.988017 kernel: software IO TLB: area num 4.
Feb 13 15:35:54.988024 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:35:54.988031 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 15:35:54.988038 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:35:54.988045 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:35:54.988052 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:35:54.988059 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:35:54.988065 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:35:54.988072 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:35:54.988078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:35:54.988092 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:35:54.988100 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:35:54.988107 kernel: GICv3: 256 SPIs implemented
Feb 13 15:35:54.988113 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:35:54.988120 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:35:54.988127 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:35:54.988133 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:35:54.988140 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:35:54.988147 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:35:54.988153 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:35:54.988160 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:35:54.988167 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:35:54.988175 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:35:54.988182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:54.988188 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:35:54.988195 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:35:54.988202 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:35:54.988209 kernel: arm-pv: using stolen time PV
Feb 13 15:35:54.988216 kernel: Console: colour dummy device 80x25
Feb 13 15:35:54.988223 kernel: ACPI: Core revision 20230628
Feb 13 15:35:54.988230 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:35:54.988237 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:35:54.988246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:35:54.988253 kernel: landlock: Up and running.
Feb 13 15:35:54.988259 kernel: SELinux: Initializing.
Feb 13 15:35:54.988269 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:54.988277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:54.988284 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:54.988292 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:54.988299 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:35:54.988306 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:35:54.988322 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:35:54.988330 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:35:54.988337 kernel: Remapping and enabling EFI services.
Feb 13 15:35:54.988344 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:35:54.988351 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:35:54.988374 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:35:54.988384 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:35:54.988391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:54.988398 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:35:54.988406 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:35:54.988419 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:35:54.988428 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:35:54.988440 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:54.988449 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:35:54.988456 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:35:54.988464 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:35:54.988471 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:35:54.988481 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:54.988488 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:35:54.988497 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:35:54.988504 kernel: SMP: Total of 4 processors activated.
Feb 13 15:35:54.988512 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:35:54.988519 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:35:54.988527 kernel: CPU features: detected: Common not Private translations
Feb 13 15:35:54.988537 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:35:54.988545 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:35:54.988557 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:35:54.988565 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:35:54.988572 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:35:54.988579 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:35:54.988586 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:35:54.988594 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:35:54.988601 kernel: alternatives: applying system-wide alternatives
Feb 13 15:35:54.988608 kernel: devtmpfs: initialized
Feb 13 15:35:54.988615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:35:54.988623 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:35:54.988632 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:35:54.988639 kernel: SMBIOS 3.0.0 present.
Feb 13 15:35:54.988651 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:35:54.988658 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:35:54.988666 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:35:54.988673 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:35:54.988680 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:35:54.988688 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:35:54.988695 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:35:54.988704 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:35:54.988711 kernel: cpuidle: using governor menu
Feb 13 15:35:54.988718 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:35:54.988726 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:35:54.988733 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:35:54.988740 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:35:54.988748 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:35:54.988758 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:35:54.988766 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:35:54.988774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:35:54.988782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:35:54.988790 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:35:54.988797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:35:54.988804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:35:54.988812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:35:54.988819 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:35:54.988826 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:35:54.988834 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:35:54.988843 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:35:54.988851 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:35:54.988858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:35:54.988865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:35:54.988873 kernel: ACPI: Interpreter enabled
Feb 13 15:35:54.988880 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:35:54.988887 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:35:54.988895 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:35:54.988902 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:35:54.988911 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:35:54.989052 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:35:54.989138 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:35:54.989208 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:35:54.989279 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:35:54.989366 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:35:54.989377 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:35:54.989387 kernel: PCI host bridge to bus 0000:00
Feb 13 15:35:54.989458 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:54.989518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:35:54.989578 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:54.989636 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:35:54.989732 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:35:54.989820 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:35:54.989892 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:35:54.989960 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:35:54.990027 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:54.990101 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:54.990172 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:35:54.990244 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:35:54.990304 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:54.990383 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:35:54.990444 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:54.990454 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:35:54.990461 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:35:54.990469 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:35:54.990476 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:35:54.990484 kernel: iommu: Default domain type: Translated
Feb 13 15:35:54.990491 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:35:54.990501 kernel: efivars: Registered efivars operations
Feb 13 15:35:54.990508 kernel: vgaarb: loaded
Feb 13 15:35:54.990520 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:35:54.990528 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:35:54.990535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:35:54.990543 kernel: pnp: PnP ACPI init
Feb 13 15:35:54.990616 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:35:54.990626 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:35:54.990637 kernel: NET: Registered PF_INET protocol family
Feb 13 15:35:54.990644 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:35:54.990652 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:35:54.990659 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:35:54.990667 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:35:54.990674 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:35:54.990681 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:35:54.990688 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:54.990696 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:54.990705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:35:54.990712 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:35:54.990719 kernel: kvm [1]: HYP mode not available
Feb 13 15:35:54.990727 kernel: Initialise system trusted keyrings
Feb 13 15:35:54.990734 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:35:54.990741 kernel: Key type asymmetric registered
Feb 13 15:35:54.990748 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:35:54.990762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:35:54.990769 kernel: io scheduler mq-deadline registered
Feb 13 15:35:54.990778 kernel: io scheduler kyber registered
Feb 13 15:35:54.990785 kernel: io scheduler bfq registered
Feb 13 15:35:54.990792 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:35:54.990800 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:35:54.990807 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:35:54.990880 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:35:54.990894 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:35:54.990901 kernel: thunder_xcv, ver 1.0
Feb 13 15:35:54.990909 kernel: thunder_bgx, ver 1.0
Feb 13 15:35:54.990918 kernel: nicpf, ver 1.0
Feb 13 15:35:54.990926 kernel: nicvf, ver 1.0
Feb 13 15:35:54.990998 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:35:54.991059 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:35:54 UTC (1739460954)
Feb 13 15:35:54.991068 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:35:54.991076 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:35:54.991089 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:35:54.991097 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:35:54.991106 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:35:54.991113 kernel: Segment Routing with IPv6
Feb 13 15:35:54.991120 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:35:54.991127 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:35:54.991134 kernel: Key type dns_resolver registered
Feb 13 15:35:54.991142 kernel: registered taskstats version 1
Feb 13 15:35:54.991149 kernel: Loading compiled-in X.509 certificates
Feb 13 15:35:54.991156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:35:54.991163 kernel: Key type .fscrypt registered
Feb 13 15:35:54.991171 kernel: Key type fscrypt-provisioning registered
Feb 13 15:35:54.991179 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:35:54.991186 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:35:54.991193 kernel: ima: No architecture policies found
Feb 13 15:35:54.991203 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:35:54.991210 kernel: clk: Disabling unused clocks
Feb 13 15:35:54.991217 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:35:54.991225 kernel: Run /init as init process
Feb 13 15:35:54.991232 kernel: with arguments:
Feb 13 15:35:54.991241 kernel: /init
Feb 13 15:35:54.991248 kernel: with environment:
Feb 13 15:35:54.991254 kernel: HOME=/
Feb 13 15:35:54.991262 kernel: TERM=linux
Feb 13 15:35:54.991269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:35:54.991278 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:54.991288 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:54.991296 systemd[1]: Detected architecture arm64.
Feb 13 15:35:54.991305 systemd[1]: Running in initrd.
Feb 13 15:35:54.991323 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:35:54.991331 systemd[1]: Hostname set to .
Feb 13 15:35:54.991340 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:54.991347 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:35:54.991355 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:54.991363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:54.991371 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:35:54.991381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:54.991389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:35:54.991396 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:35:54.991406 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:35:54.991414 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:35:54.991422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:54.991430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:54.991439 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:35:54.991447 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:54.991454 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:54.991462 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:35:54.991470 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:54.991478 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:54.991485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:35:54.991493 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:35:54.991502 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:54.991510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:54.991518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:54.991529 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:35:54.991537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:35:54.991544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:54.991552 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:35:54.991560 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:35:54.991568 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:54.991577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:54.991585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:54.991593 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:54.991601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:54.991608 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:35:54.991641 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:35:54.991664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:35:54.991672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:54.991682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:54.991690 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:35:54.991699 systemd-journald[238]: Journal started
Feb 13 15:35:54.991718 systemd-journald[238]: Runtime Journal (/run/log/journal/7b3d4eb01edc4552b3666741600879ec) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:35:54.977864 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 15:35:54.995990 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:54.997471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:55.000688 kernel: Bridge firewalling registered
Feb 13 15:35:54.999235 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 15:35:55.000093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:55.004154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:55.008456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:35:55.012477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:35:55.015486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:55.018196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:55.021028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:55.023409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:55.033473 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:35:55.035980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:55.045779 dracut-cmdline[280]: dracut-dracut-053
Feb 13 15:35:55.048323 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:35:55.069416 systemd-resolved[283]: Positive Trust Anchors:
Feb 13 15:35:55.069525 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:35:55.069558 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:35:55.076581 systemd-resolved[283]: Defaulting to hostname 'linux'.
Feb 13 15:35:55.078057 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:55.079591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:55.127349 kernel: SCSI subsystem initialized
Feb 13 15:35:55.132327 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:35:55.141341 kernel: iscsi: registered transport (tcp)
Feb 13 15:35:55.154441 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:35:55.154473 kernel: QLogic iSCSI HBA Driver
Feb 13 15:35:55.202778 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:55.214471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:35:55.232372 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:35:55.232435 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:35:55.232461 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:35:55.286357 kernel: raid6: neonx8 gen() 15130 MB/s
Feb 13 15:35:55.302393 kernel: raid6: neonx4 gen() 11235 MB/s
Feb 13 15:35:55.319341 kernel: raid6: neonx2 gen() 12947 MB/s
Feb 13 15:35:55.336352 kernel: raid6: neonx1 gen() 9827 MB/s
Feb 13 15:35:55.353342 kernel: raid6: int64x8 gen() 5753 MB/s
Feb 13 15:35:55.370344 kernel: raid6: int64x4 gen() 7185 MB/s
Feb 13 15:35:55.387342 kernel: raid6: int64x2 gen() 6008 MB/s
Feb 13 15:35:55.404553 kernel: raid6: int64x1 gen() 5043 MB/s
Feb 13 15:35:55.404576 kernel: raid6: using algorithm neonx8 gen() 15130 MB/s
Feb 13 15:35:55.422450 kernel: raid6: .... xor() 11882 MB/s, rmw enabled
Feb 13 15:35:55.422465 kernel: raid6: using neon recovery algorithm
Feb 13 15:35:55.427337 kernel: xor: measuring software checksum speed
Feb 13 15:35:55.428613 kernel: 8regs : 17238 MB/sec
Feb 13 15:35:55.428625 kernel: 32regs : 19650 MB/sec
Feb 13 15:35:55.429919 kernel: arm64_neon : 26945 MB/sec
Feb 13 15:35:55.429931 kernel: xor: using function: arm64_neon (26945 MB/sec)
Feb 13 15:35:55.482380 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:35:55.493556 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:55.506516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:55.519141 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Feb 13 15:35:55.522276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:55.525057 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:35:55.541821 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Feb 13 15:35:55.573395 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:55.587487 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:55.627911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:55.637791 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:35:55.648773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:55.652635 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:55.653934 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:55.656667 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:55.667565 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:35:55.677091 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:55.683333 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:35:55.698967 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:35:55.699104 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:35:55.699117 kernel: GPT:9289727 != 19775487
Feb 13 15:35:55.699127 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:35:55.699136 kernel: GPT:9289727 != 19775487
Feb 13 15:35:55.699152 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:35:55.699163 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:55.698873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:55.698994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:55.701232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:55.702577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:55.702751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:55.705266 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:55.714111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:55.728362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:55.732350 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (525)
Feb 13 15:35:55.732380 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (512)
Feb 13 15:35:55.736598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:35:55.741188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:35:55.748372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:35:55.752457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:35:55.753818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:35:55.771494 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:35:55.773477 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:55.779981 disk-uuid[553]: Primary Header is updated.
Feb 13 15:35:55.779981 disk-uuid[553]: Secondary Entries is updated.
Feb 13 15:35:55.779981 disk-uuid[553]: Secondary Header is updated.
Feb 13 15:35:55.792352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:55.798008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:56.802355 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:56.805568 disk-uuid[554]: The operation has completed successfully.
Feb 13 15:35:56.824970 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:35:56.825085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:35:56.844475 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:35:56.847365 sh[573]: Success
Feb 13 15:35:56.861837 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:35:56.909780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:35:56.911726 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:35:56.913383 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:35:56.926460 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:35:56.926498 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:56.926517 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:35:56.927570 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:35:56.929324 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:35:56.932239 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:35:56.933704 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:35:56.934508 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:35:56.937501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:35:56.949068 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:56.949133 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:56.950336 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:56.955367 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:56.963172 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:35:56.965364 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:56.974364 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:35:56.979550 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:35:57.035564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:57.043901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:35:57.066504 systemd-networkd[758]: lo: Link UP
Feb 13 15:35:57.066517 systemd-networkd[758]: lo: Gained carrier
Feb 13 15:35:57.067308 systemd-networkd[758]: Enumeration completed
Feb 13 15:35:57.067665 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:35:57.067828 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:57.067831 systemd-networkd[758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:35:57.068738 systemd-networkd[758]: eth0: Link UP
Feb 13 15:35:57.068741 systemd-networkd[758]: eth0: Gained carrier
Feb 13 15:35:57.068747 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:57.069897 systemd[1]: Reached target network.target - Network.
Feb 13 15:35:57.095390 systemd-networkd[758]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:35:57.099230 ignition[676]: Ignition 2.20.0
Feb 13 15:35:57.099240 ignition[676]: Stage: fetch-offline
Feb 13 15:35:57.099273 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:57.099282 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:57.099457 ignition[676]: parsed url from cmdline: ""
Feb 13 15:35:57.099460 ignition[676]: no config URL provided
Feb 13 15:35:57.099465 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:35:57.099472 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:35:57.099498 ignition[676]: op(1): [started] loading QEMU firmware config module
Feb 13 15:35:57.099502 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:35:57.106640 ignition[676]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:35:57.106660 ignition[676]: QEMU firmware config was not found. Ignoring...
Feb 13 15:35:57.128648 ignition[676]: parsing config with SHA512: 34169ab58f5526c4a6b4f6ab634b1775ac698b8613a01e3e2830a5a53416a831924a04d44b8dc60b1dd2a57726d0a87d871cc174f5e97dafacccf7122f12e616
Feb 13 15:35:57.132996 unknown[676]: fetched base config from "system"
Feb 13 15:35:57.133006 unknown[676]: fetched user config from "qemu"
Feb 13 15:35:57.133437 ignition[676]: fetch-offline: fetch-offline passed
Feb 13 15:35:57.135777 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:57.133519 ignition[676]: Ignition finished successfully
Feb 13 15:35:57.137111 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:35:57.143461 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:35:57.154555 ignition[772]: Ignition 2.20.0
Feb 13 15:35:57.154564 ignition[772]: Stage: kargs
Feb 13 15:35:57.154722 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:57.154731 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:57.158663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:57.155601 ignition[772]: kargs: kargs passed
Feb 13 15:35:57.155648 ignition[772]: Ignition finished successfully
Feb 13 15:35:57.166484 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:35:57.175603 ignition[781]: Ignition 2.20.0
Feb 13 15:35:57.175613 ignition[781]: Stage: disks
Feb 13 15:35:57.175770 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:57.175779 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:57.176643 ignition[781]: disks: disks passed
Feb 13 15:35:57.178614 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:35:57.176687 ignition[781]: Ignition finished successfully
Feb 13 15:35:57.180007 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:57.181428 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:35:57.183762 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:57.185291 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:35:57.187240 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:35:57.195461 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:35:57.204582 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:35:57.210112 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:35:57.212813 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:35:57.260335 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:35:57.261362 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:35:57.262369 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:57.278470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:57.280404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:35:57.281763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:35:57.281806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:35:57.289406 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 15:35:57.281829 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:57.290174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:35:57.294354 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:57.294382 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:57.294399 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:57.294599 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:35:57.298163 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:57.300001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:57.339987 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:35:57.344699 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:35:57.349289 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:35:57.353112 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:35:57.430984 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:57.438453 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:35:57.441202 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:57.446327 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:57.460131 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:57.463918 ignition[915]: INFO : Ignition 2.20.0
Feb 13 15:35:57.465898 ignition[915]: INFO : Stage: mount
Feb 13 15:35:57.465898 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:57.465898 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:57.469531 ignition[915]: INFO : mount: mount passed
Feb 13 15:35:57.469531 ignition[915]: INFO : Ignition finished successfully
Feb 13 15:35:57.470379 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:35:57.481431 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:35:57.925174 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:35:57.942506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:57.949960 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
Feb 13 15:35:57.952959 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:35:57.953002 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:57.953013 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:57.955354 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:57.956829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:57.982401 ignition[945]: INFO : Ignition 2.20.0
Feb 13 15:35:57.982401 ignition[945]: INFO : Stage: files
Feb 13 15:35:57.984099 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:57.984099 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:57.988159 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:35:57.989770 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:35:57.989770 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:35:57.993700 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:35:57.995240 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:35:57.995240 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:35:57.994327 unknown[945]: wrote ssh authorized keys file for user: core
Feb 13 15:35:57.999348 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:57.999348 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:35:58.073467 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:35:58.288246 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:35:58.288246 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:35:58.292373 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:35:58.445440 systemd-networkd[758]: eth0: Gained IPv6LL
Feb 13 15:35:58.620661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:35:58.817886 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:35:58.817886 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:35:58.821754 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:58.846160 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:58.849745 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:58.851297 ignition[945]: INFO : files: files passed
Feb 13 15:35:58.851297 ignition[945]: INFO : Ignition finished successfully
Feb 13 15:35:58.853748 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:35:58.862562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:35:58.865572 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:35:58.868406 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:35:58.868499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:35:58.874185 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:35:58.876455 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:58.876455 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:58.879458 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:58.880353 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:58.882463 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:35:58.892471 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:35:58.911475 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:35:58.911603 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:35:58.913782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:58.915682 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:35:58.917551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:35:58.918360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:35:58.933108 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:58.940471 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:35:58.949568 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:58.950940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:58.953148 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:35:58.954932 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:35:58.955045 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:58.957612 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:35:58.959660 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:35:58.961299 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:35:58.962995 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:58.965014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:58.967079 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:35:58.968969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:58.970980 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:35:58.972997 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:35:58.974434 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:35:58.976033 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:35:58.976160 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:58.978551 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:58.980550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:58.982552 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:35:58.982648 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:58.984624 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:35:58.984732 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:58.987454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:35:58.987574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:58.989493 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:35:58.991123 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:35:58.995385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:58.996771 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:35:58.998973 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:35:59.000500 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:35:59.000654 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:59.002113 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:35:59.002256 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:59.003852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:35:59.003965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:59.005708 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:35:59.005808 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:35:59.017512 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:35:59.018471 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:35:59.018600 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:59.022528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:59.024258 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:35:59.024403 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:59.025760 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:35:59.025929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:59.030841 ignition[1001]: INFO : Ignition 2.20.0
Feb 13 15:35:59.030841 ignition[1001]: INFO : Stage: umount
Feb 13 15:35:59.030841 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:59.030841 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:59.037566 ignition[1001]: INFO : umount: umount passed
Feb 13 15:35:59.037566 ignition[1001]: INFO : Ignition finished successfully
Feb 13 15:35:59.031506 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:35:59.032437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:35:59.035982 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:35:59.040498 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:35:59.040584 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:35:59.043244 systemd[1]: Stopped target network.target - Network.
Feb 13 15:35:59.044749 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:35:59.044807 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:35:59.046728 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:35:59.046775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:59.048516 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:35:59.048622 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:35:59.050292 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:35:59.050348 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:59.052489 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:35:59.055571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:59.065365 systemd-networkd[758]: eth0: DHCPv6 lease lost
Feb 13 15:35:59.065567 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:35:59.065672 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:59.068591 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:35:59.068701 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:35:59.071294 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:35:59.071419 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:59.084448 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:35:59.085636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:35:59.085698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:59.088071 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:35:59.088120 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:59.089921 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:35:59.090018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:59.092324 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:35:59.092374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:59.094627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:59.098968 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:35:59.099055 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:59.102568 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:35:59.102616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:59.106710 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:35:59.106865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:59.108932 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:35:59.109093 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:35:59.110686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:35:59.110745 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:59.112192 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:35:59.112229 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:59.114250 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:35:59.114298 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:59.117386 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:35:59.117433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:59.120237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:59.120288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:59.135509 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:35:59.136638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:35:59.136699 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:59.138891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:59.138936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:59.141347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:35:59.141435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:35:59.143971 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:35:59.146746 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:35:59.156515 systemd[1]: Switching root.
Feb 13 15:35:59.187333 systemd-journald[238]: Journal stopped
Feb 13 15:35:59.910542 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:35:59.910590 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:35:59.910606 kernel: SELinux: policy capability open_perms=1
Feb 13 15:35:59.910615 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:35:59.910625 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:35:59.910634 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:35:59.910644 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:35:59.910653 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:35:59.910662 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:35:59.910671 kernel: audit: type=1403 audit(1739460959.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:35:59.910684 systemd[1]: Successfully loaded SELinux policy in 34.008ms.
Feb 13 15:35:59.910703 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.850ms.
Feb 13 15:35:59.910715 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:59.910725 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:59.910736 systemd[1]: Detected architecture arm64.
Feb 13 15:35:59.910747 systemd[1]: Detected first boot.
Feb 13 15:35:59.910757 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:59.910768 zram_generator::config[1046]: No configuration found.
Feb 13 15:35:59.910782 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:35:59.910795 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:35:59.910805 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:35:59.910830 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:35:59.910841 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:35:59.910853 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:35:59.910863 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:35:59.910873 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:35:59.910884 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:35:59.910896 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:35:59.910907 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:35:59.910917 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:35:59.910928 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:59.910939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:59.910950 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:35:59.910960 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:35:59.910971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:35:59.910981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:59.910994 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:35:59.911005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:59.911018 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:35:59.911029 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:59.911040 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:59.911050 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:35:59.911066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:59.911084 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:59.911095 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:59.911105 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:59.911116 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:35:59.911126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:35:59.911137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:59.911147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:59.911157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:59.911168 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:35:59.911180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:35:59.911192 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:35:59.911204 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:35:59.911214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:35:59.911224 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:35:59.911235 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:35:59.911246 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:35:59.911256 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:35:59.911267 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:35:59.911279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:35:59.911289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:59.911299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:35:59.911310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:35:59.911327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:35:59.911337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:35:59.911348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:35:59.911358 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:35:59.911368 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:35:59.911380 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:35:59.911390 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:35:59.911401 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:35:59.911411 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:35:59.911421 kernel: fuse: init (API version 7.39)
Feb 13 15:35:59.911430 kernel: loop: module loaded
Feb 13 15:35:59.911440 kernel: ACPI: bus type drm_connector registered
Feb 13 15:35:59.911449 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:59.911459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:59.911471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:35:59.911482 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:35:59.911509 systemd-journald[1113]: Collecting audit messages is disabled.
Feb 13 15:35:59.911531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:59.911541 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:35:59.911557 systemd-journald[1113]: Journal started
Feb 13 15:35:59.911579 systemd-journald[1113]: Runtime Journal (/run/log/journal/7b3d4eb01edc4552b3666741600879ec) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:35:59.716114 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:35:59.731383 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:35:59.731715 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:35:59.912919 systemd[1]: Stopped verity-setup.service.
Feb 13 15:35:59.916647 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:59.917263 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:35:59.918455 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:35:59.919540 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:35:59.920564 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:35:59.921639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:35:59.922721 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:35:59.923843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:35:59.926371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:59.927836 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:35:59.927976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:35:59.929431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:35:59.929561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:35:59.930900 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:35:59.931041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:35:59.933752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:35:59.933940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:35:59.935499 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:35:59.936385 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:35:59.937747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:35:59.937887 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:35:59.939288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:59.940677 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:35:59.942175 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:35:59.954806 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:35:59.966421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:35:59.968684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:35:59.969827 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:35:59.969881 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:59.971840 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:35:59.974053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:35:59.976193 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:35:59.977378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:35:59.978801 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:35:59.981478 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:35:59.982738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:35:59.984504 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:35:59.985598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:35:59.989559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:59.992501 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:35:59.995668 systemd-journald[1113]: Time spent on flushing to /var/log/journal/7b3d4eb01edc4552b3666741600879ec is 14.798ms for 855 entries.
Feb 13 15:35:59.995668 systemd-journald[1113]: System Journal (/var/log/journal/7b3d4eb01edc4552b3666741600879ec) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:36:00.028879 systemd-journald[1113]: Received client request to flush runtime journal.
Feb 13 15:35:59.996973 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:36:00.002459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:36:00.003964 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:36:00.005437 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:36:00.006718 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:36:00.010394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:36:00.013577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:36:00.029651 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:36:00.033878 kernel: loop0: detected capacity change from 0 to 189592
Feb 13 15:36:00.034089 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:36:00.037348 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:36:00.039962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:00.046648 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:36:00.055233 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:36:00.056602 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:36:00.065558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:36:00.067192 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:36:00.068567 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:36:00.095335 kernel: loop1: detected capacity change from 0 to 113536
Feb 13 15:36:00.097832 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:36:00.097851 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:36:00.101934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:36:00.135346 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 15:36:00.165350 kernel: loop3: detected capacity change from 0 to 189592
Feb 13 15:36:00.177450 kernel: loop4: detected capacity change from 0 to 113536
Feb 13 15:36:00.183671 kernel: loop5: detected capacity change from 0 to 116808
Feb 13 15:36:00.187710 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:36:00.188089 (sd-merge)[1182]: Merged extensions into '/usr'.
Feb 13 15:36:00.192424 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:36:00.192440 systemd[1]: Reloading...
Feb 13 15:36:00.246348 zram_generator::config[1208]: No configuration found.
Feb 13 15:36:00.279672 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:36:00.342417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:36:00.377338 systemd[1]: Reloading finished in 184 ms.
Feb 13 15:36:00.404497 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:36:00.405900 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:36:00.422493 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:36:00.424876 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:36:00.432822 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:36:00.432837 systemd[1]: Reloading...
Feb 13 15:36:00.440665 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:36:00.440912 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:36:00.441656 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:36:00.441866 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Feb 13 15:36:00.441915 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Feb 13 15:36:00.443845 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:36:00.443861 systemd-tmpfiles[1243]: Skipping /boot
Feb 13 15:36:00.450487 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:36:00.450504 systemd-tmpfiles[1243]: Skipping /boot
Feb 13 15:36:00.479394 zram_generator::config[1270]: No configuration found.
Feb 13 15:36:00.554671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:36:00.589150 systemd[1]: Reloading finished in 156 ms.
Feb 13 15:36:00.603130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:36:00.615781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:36:00.623515 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:36:00.625901 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:36:00.628155 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:36:00.634578 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:36:00.639681 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:36:00.641863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:36:00.645033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:36:00.648819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:36:00.652275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:36:00.656277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:36:00.657680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:36:00.659007 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:36:00.661956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:36:00.662126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:36:00.663734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:36:00.663851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:36:00.667559 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:36:00.667695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:36:00.673971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:36:00.677311 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Feb 13 15:36:00.682705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:36:00.688446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:36:00.690619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:36:00.691743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:36:00.695843 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:36:00.700567 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:36:00.705347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:36:00.707097 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:36:00.708899 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:36:00.709421 augenrules[1355]: No rules
Feb 13 15:36:00.710448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:36:00.710575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:36:00.712144 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:36:00.712304 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:36:00.713896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:36:00.714029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:36:00.715735 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:36:00.715853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:36:00.717497 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:36:00.729822 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:36:00.740543 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:36:00.741482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:36:00.744499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:36:00.745335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1345)
Feb 13 15:36:00.747461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:36:00.753594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:36:00.756899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:36:00.758040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:36:00.759647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:36:00.762840 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:36:00.767487 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:36:00.767826 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:36:00.769362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:36:00.769524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:36:00.770954 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:36:00.771112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:36:00.773001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:36:00.773157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:36:00.776725 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:36:00.776857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:36:00.784895 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:36:00.788733 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:36:00.791875 augenrules[1375]: /sbin/augenrules: No change
Feb 13 15:36:00.795557 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:36:00.796791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:36:00.796857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:36:00.805748 augenrules[1413]: No rules
Feb 13 15:36:00.807044 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:36:00.808388 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:36:00.842469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:36:00.847448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:36:00.848777 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:36:00.872984 systemd-networkd[1386]: lo: Link UP
Feb 13 15:36:00.872995 systemd-networkd[1386]: lo: Gained carrier
Feb 13 15:36:00.873875 systemd-networkd[1386]: Enumeration completed
Feb 13 15:36:00.873997 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:36:00.875441 systemd-resolved[1310]: Positive Trust Anchors:
Feb 13 15:36:00.875756 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:36:00.875849 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:36:00.877558 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:00.877568 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:36:00.878359 systemd-networkd[1386]: eth0: Link UP
Feb 13 15:36:00.878369 systemd-networkd[1386]: eth0: Gained carrier
Feb 13 15:36:00.878383 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:36:00.886996 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:36:00.892189 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Feb 13 15:36:00.895436 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:36:00.895801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:36:00.897587 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection.
Feb 13 15:36:00.898613 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:36:00.898662 systemd-timesyncd[1389]: Initial clock synchronization to Thu 2025-02-13 15:36:00.659068 UTC.
Feb 13 15:36:00.899465 systemd[1]: Reached target network.target - Network.
Feb 13 15:36:00.900568 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:36:00.911554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:00.920562 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:36:00.925438 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:36:00.949005 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:36:00.955381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:00.987940 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:36:00.989504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:36:00.992397 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:36:00.993549 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:36:00.994821 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:36:00.996189 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:36:00.997379 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:36:00.998759 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:36:00.999990 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:36:01.000026 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:36:01.000902 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:36:01.002337 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:36:01.004626 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:36:01.014260 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:36:01.016425 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:36:01.017934 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:36:01.019109 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:36:01.020051 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:36:01.021033 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:36:01.021063 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:36:01.021924 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:36:01.026330 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:36:01.023895 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:36:01.026945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:36:01.030660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:36:01.031807 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:36:01.033157 jq[1443]: false Feb 13 15:36:01.033530 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:36:01.035663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:36:01.040511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 15:36:01.043129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:36:01.047912 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:36:01.052564 extend-filesystems[1444]: Found loop3 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found loop4 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found loop5 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda1 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda2 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda3 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found usr Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda4 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda6 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda7 Feb 13 15:36:01.052564 extend-filesystems[1444]: Found vda9 Feb 13 15:36:01.052564 extend-filesystems[1444]: Checking size of /dev/vda9 Feb 13 15:36:01.051445 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:36:01.051895 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:36:01.052576 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:36:01.058814 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:36:01.064181 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:36:01.076208 jq[1459]: true Feb 13 15:36:01.075796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:36:01.075961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:36:01.076276 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 15:36:01.077060 dbus-daemon[1442]: [system] SELinux support is enabled Feb 13 15:36:01.077497 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:36:01.078880 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:36:01.082395 extend-filesystems[1444]: Resized partition /dev/vda9 Feb 13 15:36:01.086188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:36:01.086361 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:36:01.086723 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:36:01.099343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1373) Feb 13 15:36:01.103326 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:36:01.114075 update_engine[1457]: I20250213 15:36:01.113276 1457 main.cc:92] Flatcar Update Engine starting Feb 13 15:36:01.120847 jq[1468]: true Feb 13 15:36:01.117871 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:36:01.123849 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:36:01.123883 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:36:01.126230 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:36:01.126254 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:36:01.127823 update_engine[1457]: I20250213 15:36:01.127732 1457 update_check_scheduler.cc:74] Next update check in 11m13s Feb 13 15:36:01.128437 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:36:01.131638 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:36:01.133462 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:36:01.133843 tar[1467]: linux-arm64/helm Feb 13 15:36:01.134145 systemd-logind[1452]: New seat seat0. Feb 13 15:36:01.135028 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:36:01.143560 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:36:01.162931 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:36:01.162931 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:36:01.162931 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:36:01.170247 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Feb 13 15:36:01.163646 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:36:01.163825 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:36:01.189029 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:36:01.193381 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:36:01.195280 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 15:36:01.209049 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:36:01.320233 containerd[1470]: time="2025-02-13T15:36:01.320133406Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:36:01.350220 containerd[1470]: time="2025-02-13T15:36:01.350116908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.351853 containerd[1470]: time="2025-02-13T15:36:01.351813675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:01.351940 containerd[1470]: time="2025-02-13T15:36:01.351926513Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:36:01.352071 containerd[1470]: time="2025-02-13T15:36:01.352053282Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:36:01.352428 containerd[1470]: time="2025-02-13T15:36:01.352404524Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:36:01.352558 containerd[1470]: time="2025-02-13T15:36:01.352540800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.352742 containerd[1470]: time="2025-02-13T15:36:01.352722513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:01.352802 containerd[1470]: time="2025-02-13T15:36:01.352789603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353143 containerd[1470]: time="2025-02-13T15:36:01.353121405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353269 containerd[1470]: time="2025-02-13T15:36:01.353211466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353354 containerd[1470]: time="2025-02-13T15:36:01.353337226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353402 containerd[1470]: time="2025-02-13T15:36:01.353390230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353605 containerd[1470]: time="2025-02-13T15:36:01.353585951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.353983 containerd[1470]: time="2025-02-13T15:36:01.353961174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:36:01.354298 containerd[1470]: time="2025-02-13T15:36:01.354268104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:36:01.354460 containerd[1470]: time="2025-02-13T15:36:01.354441436Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:36:01.354697 containerd[1470]: time="2025-02-13T15:36:01.354676464Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:36:01.354873 containerd[1470]: time="2025-02-13T15:36:01.354853521Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:36:01.359682 containerd[1470]: time="2025-02-13T15:36:01.359656331Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:36:01.359844 containerd[1470]: time="2025-02-13T15:36:01.359824851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:36:01.360017 containerd[1470]: time="2025-02-13T15:36:01.359997019Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:36:01.360156 containerd[1470]: time="2025-02-13T15:36:01.360138959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:36:01.360226 containerd[1470]: time="2025-02-13T15:36:01.360213460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:36:01.360536 containerd[1470]: time="2025-02-13T15:36:01.360468782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:36:01.360989 containerd[1470]: time="2025-02-13T15:36:01.360965535Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:36:01.361272 containerd[1470]: time="2025-02-13T15:36:01.361240841Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:36:01.361337 containerd[1470]: time="2025-02-13T15:36:01.361274599Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 15:36:01.361337 containerd[1470]: time="2025-02-13T15:36:01.361291789Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:36:01.361337 containerd[1470]: time="2025-02-13T15:36:01.361310453Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361401 containerd[1470]: time="2025-02-13T15:36:01.361344483Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361401 containerd[1470]: time="2025-02-13T15:36:01.361357133Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361401 containerd[1470]: time="2025-02-13T15:36:01.361369511Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361401 containerd[1470]: time="2025-02-13T15:36:01.361383324Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361401 containerd[1470]: time="2025-02-13T15:36:01.361395974Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361407266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361417704Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361437260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361450919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361461861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361479 containerd[1470]: time="2025-02-13T15:36:01.361473036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361483940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361496551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361507726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361519018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361530891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361544472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361554794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361565581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:36:01.361580 containerd[1470]: time="2025-02-13T15:36:01.361577066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361722 containerd[1470]: time="2025-02-13T15:36:01.361592743Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:36:01.361722 containerd[1470]: time="2025-02-13T15:36:01.361615326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361722 containerd[1470]: time="2025-02-13T15:36:01.361628170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.361722 containerd[1470]: time="2025-02-13T15:36:01.361645282Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:36:01.361829 containerd[1470]: time="2025-02-13T15:36:01.361813336Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:36:01.361856 containerd[1470]: time="2025-02-13T15:36:01.361835182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:36:01.361856 containerd[1470]: time="2025-02-13T15:36:01.361847987Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:36:01.361907 containerd[1470]: time="2025-02-13T15:36:01.361860094Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:36:01.361907 containerd[1470]: time="2025-02-13T15:36:01.361869212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 15:36:01.361907 containerd[1470]: time="2025-02-13T15:36:01.361880426Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:36:01.361907 containerd[1470]: time="2025-02-13T15:36:01.361888963Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:36:01.361907 containerd[1470]: time="2025-02-13T15:36:01.361905144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:36:01.362263 containerd[1470]: time="2025-02-13T15:36:01.362213121Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:36:01.362263 containerd[1470]: time="2025-02-13T15:36:01.362261004Z" level=info msg="Connect containerd service" Feb 13 15:36:01.362434 containerd[1470]: time="2025-02-13T15:36:01.362299147Z" level=info msg="using legacy CRI server" Feb 13 15:36:01.362434 containerd[1470]: time="2025-02-13T15:36:01.362306480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:36:01.362548 containerd[1470]: time="2025-02-13T15:36:01.362527462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:36:01.363134 containerd[1470]: time="2025-02-13T15:36:01.363110435Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363375962Z" level=info msg="Start subscribing containerd event" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363422991Z" level=info msg="Start recovering state" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363484881Z" level=info msg="Start event monitor" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363496949Z" level=info msg="Start snapshots syncer" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363505912Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363513750Z" level=info msg="Start streaming server" Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363670552Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363708113Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:36:01.364093 containerd[1470]: time="2025-02-13T15:36:01.363757238Z" level=info msg="containerd successfully booted in 0.044427s" Feb 13 15:36:01.363856 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:36:01.490405 tar[1467]: linux-arm64/LICENSE Feb 13 15:36:01.490617 tar[1467]: linux-arm64/README.md Feb 13 15:36:01.507189 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:36:01.859868 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:36:01.877737 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:36:01.891617 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:36:01.896808 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:36:01.897027 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Feb 13 15:36:01.899741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:36:01.911004 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:36:01.913772 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:36:01.915870 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:36:01.917267 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:36:02.605490 systemd-networkd[1386]: eth0: Gained IPv6LL Feb 13 15:36:02.607903 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:36:02.609853 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:36:02.619648 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:36:02.622103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:02.624257 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:36:02.637988 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:36:02.638217 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:36:02.639900 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:36:02.647130 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:36:03.164833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:03.166410 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:36:03.167714 systemd[1]: Startup finished in 622ms (kernel) + 4.603s (initrd) + 3.879s (userspace) = 9.105s. 
Feb 13 15:36:03.170619 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:03.573512 kubelet[1556]: E0213 15:36:03.573409 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:03.575461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:03.575598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:07.009231 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:36:07.010526 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112). Feb 13 15:36:07.074176 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:07.076583 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:07.090691 systemd-logind[1452]: New session 1 of user core. Feb 13 15:36:07.091652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:36:07.099578 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:36:07.108748 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:36:07.110986 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:36:07.117420 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:36:07.199769 systemd[1573]: Queued start job for default target default.target. 
Feb 13 15:36:07.215296 systemd[1573]: Created slice app.slice - User Application Slice. Feb 13 15:36:07.215365 systemd[1573]: Reached target paths.target - Paths. Feb 13 15:36:07.215378 systemd[1573]: Reached target timers.target - Timers. Feb 13 15:36:07.216604 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:36:07.226562 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:36:07.226635 systemd[1573]: Reached target sockets.target - Sockets. Feb 13 15:36:07.226647 systemd[1573]: Reached target basic.target - Basic System. Feb 13 15:36:07.226693 systemd[1573]: Reached target default.target - Main User Target. Feb 13 15:36:07.226722 systemd[1573]: Startup finished in 104ms. Feb 13 15:36:07.227071 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:36:07.228787 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:36:07.285574 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:57120.service - OpenSSH per-connection server daemon (10.0.0.1:57120). Feb 13 15:36:07.326909 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 57120 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:07.328120 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:07.332268 systemd-logind[1452]: New session 2 of user core. Feb 13 15:36:07.346527 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:36:07.397603 sshd[1586]: Connection closed by 10.0.0.1 port 57120 Feb 13 15:36:07.398039 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:07.404297 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:57120.service: Deactivated successfully. Feb 13 15:36:07.405644 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:36:07.407473 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 15:36:07.407876 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122). Feb 13 15:36:07.408483 systemd-logind[1452]: Removed session 2. Feb 13 15:36:07.450112 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:07.451347 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:07.455292 systemd-logind[1452]: New session 3 of user core. Feb 13 15:36:07.462457 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:36:07.510563 sshd[1593]: Connection closed by 10.0.0.1 port 57122 Feb 13 15:36:07.510453 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:07.524670 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:57122.service: Deactivated successfully. Feb 13 15:36:07.525965 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:36:07.527117 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:36:07.528119 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:57132.service - OpenSSH per-connection server daemon (10.0.0.1:57132). Feb 13 15:36:07.529225 systemd-logind[1452]: Removed session 3. Feb 13 15:36:07.571229 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 57132 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:07.572654 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:07.577124 systemd-logind[1452]: New session 4 of user core. Feb 13 15:36:07.588528 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:36:07.639236 sshd[1600]: Connection closed by 10.0.0.1 port 57132 Feb 13 15:36:07.639541 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:07.655565 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:57132.service: Deactivated successfully. Feb 13 15:36:07.656998 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:36:07.658240 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:36:07.659340 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:57148.service - OpenSSH per-connection server daemon (10.0.0.1:57148). Feb 13 15:36:07.660108 systemd-logind[1452]: Removed session 4. Feb 13 15:36:07.702886 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:36:07.704046 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:07.710913 systemd-logind[1452]: New session 5 of user core. Feb 13 15:36:07.721124 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:36:07.785583 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:36:07.785857 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:36:08.115573 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:36:08.115709 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:36:08.367841 dockerd[1628]: time="2025-02-13T15:36:08.367726287Z" level=info msg="Starting up" Feb 13 15:36:08.504021 dockerd[1628]: time="2025-02-13T15:36:08.503976723Z" level=info msg="Loading containers: start." 
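The entries above all share the journal shape `<timestamp> <process>[<pid>]: <message>`. A small sketch of splitting such a line into fields (kernel lines without a PID are deliberately out of scope; `parse_entry` and `JOURNAL_RE` are illustrative names):

```python
import re

# Matches the "<timestamp> <process>[<pid>]: <message>" shape used by the
# journal entries above; kernel lines without a PID are out of scope here.
JOURNAL_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<proc>\S+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def parse_entry(line: str) -> dict:
    m = JOURNAL_RE.match(line)
    if m is None:
        raise ValueError(f"not a unit[pid] journal line: {line!r}")
    return m.groupdict()

entry = parse_entry(
    "Feb 13 15:36:07.785857 sudo[1608]: pam_unix(sudo:session): "
    "session opened for user root(uid=0) by core(uid=500)"
)
```

The greedy `\S+` still backtracks correctly over forms like `(dockerd)[1628]:`, so the parenthesized comm names in this log parse the same way.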
Feb 13 15:36:08.643352 kernel: Initializing XFRM netlink socket Feb 13 15:36:08.711958 systemd-networkd[1386]: docker0: Link UP Feb 13 15:36:08.746566 dockerd[1628]: time="2025-02-13T15:36:08.746517885Z" level=info msg="Loading containers: done." Feb 13 15:36:08.760978 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4202209084-merged.mount: Deactivated successfully. Feb 13 15:36:08.766173 dockerd[1628]: time="2025-02-13T15:36:08.766125952Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:36:08.766244 dockerd[1628]: time="2025-02-13T15:36:08.766230667Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:36:08.766371 dockerd[1628]: time="2025-02-13T15:36:08.766349138Z" level=info msg="Daemon has completed initialization" Feb 13 15:36:08.794632 dockerd[1628]: time="2025-02-13T15:36:08.794569058Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:36:08.794802 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:36:09.459837 containerd[1470]: time="2025-02-13T15:36:09.459780096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:36:10.078559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208185635.mount: Deactivated successfully. 
Feb 13 15:36:11.103741 containerd[1470]: time="2025-02-13T15:36:11.103341020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:11.104545 containerd[1470]: time="2025-02-13T15:36:11.104283063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 15:36:11.105693 containerd[1470]: time="2025-02-13T15:36:11.105636323Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:11.108788 containerd[1470]: time="2025-02-13T15:36:11.108755725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:11.110048 containerd[1470]: time="2025-02-13T15:36:11.109998422Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.650177173s" Feb 13 15:36:11.110048 containerd[1470]: time="2025-02-13T15:36:11.110043346Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:36:11.110951 containerd[1470]: time="2025-02-13T15:36:11.110860778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:36:12.287156 containerd[1470]: time="2025-02-13T15:36:12.287099287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:12.288466 containerd[1470]: time="2025-02-13T15:36:12.288426956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 15:36:12.289335 containerd[1470]: time="2025-02-13T15:36:12.289291081Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:12.292610 containerd[1470]: time="2025-02-13T15:36:12.292575892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:12.293343 containerd[1470]: time="2025-02-13T15:36:12.293287992Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.182389027s" Feb 13 15:36:12.293343 containerd[1470]: time="2025-02-13T15:36:12.293332960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:36:12.294005 containerd[1470]: time="2025-02-13T15:36:12.293866419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:36:13.358233 containerd[1470]: time="2025-02-13T15:36:13.358056598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:13.359138 containerd[1470]: time="2025-02-13T15:36:13.359098996Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 15:36:13.359866 containerd[1470]: time="2025-02-13T15:36:13.359814417Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:13.364969 containerd[1470]: time="2025-02-13T15:36:13.364932097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:13.366387 containerd[1470]: time="2025-02-13T15:36:13.366358088Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.07245873s" Feb 13 15:36:13.366426 containerd[1470]: time="2025-02-13T15:36:13.366385601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:36:13.366977 containerd[1470]: time="2025-02-13T15:36:13.366764821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:36:13.825881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:36:13.835504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:13.929289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
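The pull timings containerd reports above (`"in 1.650177173s"`, `"in 500.488414ms"`) use Go's duration formatting. A sketch for converting those single-unit values to seconds (compound forms like `1m30s` do not appear in this log and are deliberately unsupported):

```python
import re

_UNIT_SECONDS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
                 "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_seconds(text: str) -> float:
    # containerd prints pull times with Go's duration formatting; the values
    # in this log all use a single unit, so compound forms are out of scope.
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", text)
    if m is None:
        raise ValueError(f"unsupported duration: {text!r}")
    return float(m.group(1)) * _UNIT_SECONDS[m.group(2)]
```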
Feb 13 15:36:13.933805 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:13.982218 kubelet[1895]: E0213 15:36:13.982160 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:13.985341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:13.985498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:14.402823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032624045.mount: Deactivated successfully. Feb 13 15:36:14.618056 containerd[1470]: time="2025-02-13T15:36:14.617992101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:14.618739 containerd[1470]: time="2025-02-13T15:36:14.618685503Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 15:36:14.619350 containerd[1470]: time="2025-02-13T15:36:14.619304938Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:14.621188 containerd[1470]: time="2025-02-13T15:36:14.621146333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:14.621976 containerd[1470]: time="2025-02-13T15:36:14.621947962Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id 
\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.255152641s" Feb 13 15:36:14.621976 containerd[1470]: time="2025-02-13T15:36:14.621976292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:36:14.622536 containerd[1470]: time="2025-02-13T15:36:14.622496493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:36:15.302779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440987458.mount: Deactivated successfully. Feb 13 15:36:15.902686 containerd[1470]: time="2025-02-13T15:36:15.902635606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:15.903171 containerd[1470]: time="2025-02-13T15:36:15.903125614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:36:15.904043 containerd[1470]: time="2025-02-13T15:36:15.903992076Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:15.907062 containerd[1470]: time="2025-02-13T15:36:15.907018841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:15.909168 containerd[1470]: time="2025-02-13T15:36:15.909139248Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.286609684s" Feb 13 15:36:15.909222 containerd[1470]: time="2025-02-13T15:36:15.909174007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:36:15.909772 containerd[1470]: time="2025-02-13T15:36:15.909607835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:36:16.400049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4178760355.mount: Deactivated successfully. Feb 13 15:36:16.404515 containerd[1470]: time="2025-02-13T15:36:16.404462682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:16.406072 containerd[1470]: time="2025-02-13T15:36:16.406024190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 15:36:16.406962 containerd[1470]: time="2025-02-13T15:36:16.406925654Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:16.409411 containerd[1470]: time="2025-02-13T15:36:16.409363409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:16.410177 containerd[1470]: time="2025-02-13T15:36:16.410134920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.488414ms" Feb 13 15:36:16.410177 containerd[1470]: time="2025-02-13T15:36:16.410174281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:36:16.410773 containerd[1470]: time="2025-02-13T15:36:16.410588601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:36:16.974351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93808868.mount: Deactivated successfully. Feb 13 15:36:18.240328 containerd[1470]: time="2025-02-13T15:36:18.240268035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:18.241119 containerd[1470]: time="2025-02-13T15:36:18.241080395Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 15:36:18.247248 containerd[1470]: time="2025-02-13T15:36:18.247211897Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:18.250879 containerd[1470]: time="2025-02-13T15:36:18.250815639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:36:18.252087 containerd[1470]: time="2025-02-13T15:36:18.252051685Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.841436266s" Feb 13 
15:36:18.252087 containerd[1470]: time="2025-02-13T15:36:18.252084264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:36:24.235844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:36:24.244516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:24.328257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:24.332240 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:36:24.394158 kubelet[2045]: E0213 15:36:24.394097 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:36:24.396619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:36:24.396768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:36:24.671191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:24.685574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:24.708471 systemd[1]: Reloading requested from client PID 2061 ('systemctl') (unit session-5.scope)... Feb 13 15:36:24.708486 systemd[1]: Reloading... Feb 13 15:36:24.778335 zram_generator::config[2101]: No configuration found. Feb 13 15:36:24.891954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
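kubelet keeps exiting with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet, and systemd reschedules it each time ("restart counter is at 1", then "at 2"). The ~10.25 s gap between the failure at 15:36:13.985 and the next scheduled attempt at 15:36:24.235 is consistent with a `RestartSec=` of 10 seconds, which is an assumption here since the unit file is not shown in the log:

```python
from datetime import datetime, timedelta

def next_restart(failed_at: datetime, restart_sec: float) -> datetime:
    # systemd schedules the next start attempt RestartSec after the failure.
    # restart_sec=10.0 is an assumed value; the kubelet unit file is not in
    # this log.
    return failed_at + timedelta(seconds=restart_sec)

FMT = "%H:%M:%S.%f"
failed = datetime.strptime("15:36:13.985498", FMT)       # kubelet.service: Failed
rescheduled = datetime.strptime("15:36:24.235844", FMT)  # restart counter is at 2
gap = (rescheduled - failed).total_seconds()
```

The observed gap slightly exceeds the assumed `RestartSec` because job scheduling itself takes a little time.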
Feb 13 15:36:24.942746 systemd[1]: Reloading finished in 233 ms. Feb 13 15:36:24.983170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:36:24.983261 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:36:24.983625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:24.985147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:25.088292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:25.092819 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:36:25.128019 kubelet[2145]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:25.128019 kubelet[2145]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:36:25.128019 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:36:25.128391 kubelet[2145]: I0213 15:36:25.128157 2145 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:36:25.785099 kubelet[2145]: I0213 15:36:25.784346 2145 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:36:25.785099 kubelet[2145]: I0213 15:36:25.784386 2145 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:36:25.785099 kubelet[2145]: I0213 15:36:25.784796 2145 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:36:25.861432 kubelet[2145]: E0213 15:36:25.861393 2145 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:25.861918 kubelet[2145]: I0213 15:36:25.861905 2145 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:36:25.869653 kubelet[2145]: E0213 15:36:25.869616 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:36:25.869653 kubelet[2145]: I0213 15:36:25.869651 2145 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:36:25.872976 kubelet[2145]: I0213 15:36:25.872949 2145 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:36:25.873734 kubelet[2145]: I0213 15:36:25.873704 2145 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:36:25.873880 kubelet[2145]: I0213 15:36:25.873853 2145 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:36:25.874064 kubelet[2145]: I0213 15:36:25.873883 2145 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:36:25.874203 kubelet[2145]: I0213 15:36:25.874191 2145 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:36:25.874226 kubelet[2145]: I0213 15:36:25.874204 2145 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:36:25.874418 kubelet[2145]: I0213 15:36:25.874403 2145 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:25.876246 kubelet[2145]: I0213 15:36:25.876216 2145 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:36:25.876246 kubelet[2145]: I0213 15:36:25.876246 2145 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:36:25.876348 kubelet[2145]: I0213 15:36:25.876337 2145 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:36:25.876377 kubelet[2145]: I0213 15:36:25.876351 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:36:25.881094 kubelet[2145]: W0213 15:36:25.880966 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:25.881094 kubelet[2145]: E0213 15:36:25.881044 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:25.881241 kubelet[2145]: I0213 15:36:25.881139 2145 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:36:25.881729 kubelet[2145]: W0213 15:36:25.881630 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get
"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:25.881729 kubelet[2145]: E0213 15:36:25.881684 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:25.883230 kubelet[2145]: I0213 15:36:25.883019 2145 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:36:25.884401 kubelet[2145]: W0213 15:36:25.884337 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:36:25.886728 kubelet[2145]: I0213 15:36:25.886595 2145 server.go:1269] "Started kubelet" Feb 13 15:36:25.886864 kubelet[2145]: I0213 15:36:25.886781 2145 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:36:25.888379 kubelet[2145]: I0213 15:36:25.888157 2145 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:36:25.888379 kubelet[2145]: I0213 15:36:25.888273 2145 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:36:25.888804 kubelet[2145]: I0213 15:36:25.888779 2145 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:36:25.889940 kubelet[2145]: I0213 15:36:25.889773 2145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:36:25.889940 kubelet[2145]: I0213 15:36:25.889923 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:36:25.892811 kubelet[2145]: I0213 
15:36:25.891309 2145 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:36:25.892811 kubelet[2145]: I0213 15:36:25.891462 2145 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:36:25.892811 kubelet[2145]: I0213 15:36:25.891665 2145 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:36:25.892811 kubelet[2145]: E0213 15:36:25.891967 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:25.892811 kubelet[2145]: E0213 15:36:25.892072 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Feb 13 15:36:25.893600 kubelet[2145]: I0213 15:36:25.893578 2145 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:36:25.893710 kubelet[2145]: I0213 15:36:25.893689 2145 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:36:25.894191 kubelet[2145]: W0213 15:36:25.894128 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:25.894270 kubelet[2145]: E0213 15:36:25.894203 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:25.894817 kubelet[2145]: E0213 15:36:25.894689 2145 kubelet.go:1478] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:36:25.895991 kubelet[2145]: I0213 15:36:25.895956 2145 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:36:25.896211 kubelet[2145]: E0213 15:36:25.894971 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce8cf46b57c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:36:25.88656224 +0000 UTC m=+0.790579607,LastTimestamp:2025-02-13 15:36:25.88656224 +0000 UTC m=+0.790579607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:36:25.907082 kubelet[2145]: I0213 15:36:25.907017 2145 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:36:25.908214 kubelet[2145]: I0213 15:36:25.908186 2145 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:36:25.908388 kubelet[2145]: I0213 15:36:25.908290 2145 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:36:25.908388 kubelet[2145]: I0213 15:36:25.908308 2145 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:36:25.908388 kubelet[2145]: E0213 15:36:25.908377 2145 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:36:25.909301 kubelet[2145]: W0213 15:36:25.908842 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:25.909301 kubelet[2145]: E0213 15:36:25.908903 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:25.909301 kubelet[2145]: I0213 15:36:25.909050 2145 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:36:25.909301 kubelet[2145]: I0213 15:36:25.909064 2145 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:36:25.909301 kubelet[2145]: I0213 15:36:25.909083 2145 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:36:25.970251 kubelet[2145]: I0213 15:36:25.970218 2145 policy_none.go:49] "None policy: Start" Feb 13 15:36:25.971530 kubelet[2145]: I0213 15:36:25.971469 2145 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:36:25.971616 kubelet[2145]: I0213 15:36:25.971552 2145 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:36:25.977740 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Feb 13 15:36:25.992106 kubelet[2145]: E0213 15:36:25.992050 2145 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:36:25.992245 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:36:25.995221 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:36:26.007418 kubelet[2145]: I0213 15:36:26.007371 2145 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:36:26.008053 kubelet[2145]: I0213 15:36:26.007590 2145 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:36:26.008053 kubelet[2145]: I0213 15:36:26.007610 2145 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:36:26.008053 kubelet[2145]: I0213 15:36:26.007908 2145 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:36:26.012483 kubelet[2145]: E0213 15:36:26.012447 2145 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:36:26.019064 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 15:36:26.035809 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 15:36:26.050772 systemd[1]: Created slice kubepods-burstable-pode40766202139838693e08ce69a9f7dbf.slice - libcontainer container kubepods-burstable-pode40766202139838693e08ce69a9f7dbf.slice. 
Feb 13 15:36:26.093585 kubelet[2145]: E0213 15:36:26.093532 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Feb 13 15:36:26.109933 kubelet[2145]: I0213 15:36:26.109893 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:26.110400 kubelet[2145]: E0213 15:36:26.110361 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Feb 13 15:36:26.192528 kubelet[2145]: I0213 15:36:26.192470 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:26.192528 kubelet[2145]: I0213 15:36:26.192535 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:26.192915 kubelet[2145]: I0213 15:36:26.192569 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:26.192915 kubelet[2145]: I0213 15:36:26.192600 2145 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:36:26.192915 kubelet[2145]: I0213 15:36:26.192627 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:26.192915 kubelet[2145]: I0213 15:36:26.192643 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:36:26.192915 kubelet[2145]: I0213 15:36:26.192659 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:26.193019 kubelet[2145]: I0213 15:36:26.192673 2145 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:26.193019 kubelet[2145]: I0213 15:36:26.192706 2145 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:36:26.311825 kubelet[2145]: I0213 15:36:26.311707 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:26.312110 kubelet[2145]: E0213 15:36:26.312065 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Feb 13 15:36:26.334484 kubelet[2145]: E0213 15:36:26.334436 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.335447 containerd[1470]: time="2025-02-13T15:36:26.335400026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:26.348930 kubelet[2145]: E0213 15:36:26.348866 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.349453 containerd[1470]: time="2025-02-13T15:36:26.349403571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:26.353751 kubelet[2145]: E0213 15:36:26.353667 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.354439 containerd[1470]: 
time="2025-02-13T15:36:26.354145328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e40766202139838693e08ce69a9f7dbf,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:26.494511 kubelet[2145]: E0213 15:36:26.494467 2145 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Feb 13 15:36:26.713431 kubelet[2145]: I0213 15:36:26.713309 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:26.713783 kubelet[2145]: E0213 15:36:26.713654 2145 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Feb 13 15:36:26.757433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863826975.mount: Deactivated successfully. 
Feb 13 15:36:26.762899 containerd[1470]: time="2025-02-13T15:36:26.762847021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:26.765858 containerd[1470]: time="2025-02-13T15:36:26.765789772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:36:26.766739 containerd[1470]: time="2025-02-13T15:36:26.766706716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:26.767592 containerd[1470]: time="2025-02-13T15:36:26.767562965Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:26.769771 containerd[1470]: time="2025-02-13T15:36:26.769691661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:26.771772 containerd[1470]: time="2025-02-13T15:36:26.770758487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:36:26.771772 containerd[1470]: time="2025-02-13T15:36:26.771106796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:36:26.774683 containerd[1470]: time="2025-02-13T15:36:26.774630968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:36:26.776301 
containerd[1470]: time="2025-02-13T15:36:26.776093372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 426.60653ms" Feb 13 15:36:26.777196 containerd[1470]: time="2025-02-13T15:36:26.777058306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 422.823753ms" Feb 13 15:36:26.778183 containerd[1470]: time="2025-02-13T15:36:26.778116141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 442.632005ms" Feb 13 15:36:26.850833 kubelet[2145]: W0213 15:36:26.850751 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:26.850833 kubelet[2145]: E0213 15:36:26.850823 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:26.910872 kubelet[2145]: W0213 15:36:26.907941 2145 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:26.910872 kubelet[2145]: E0213 15:36:26.908020 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:26.919915 containerd[1470]: time="2025-02-13T15:36:26.919790093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:26.919915 containerd[1470]: time="2025-02-13T15:36:26.919868289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:26.919915 containerd[1470]: time="2025-02-13T15:36:26.919884792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.920300 containerd[1470]: time="2025-02-13T15:36:26.920207808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.922396 containerd[1470]: time="2025-02-13T15:36:26.921478497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:26.922396 containerd[1470]: time="2025-02-13T15:36:26.921532160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:26.922396 containerd[1470]: time="2025-02-13T15:36:26.921543627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.922396 containerd[1470]: time="2025-02-13T15:36:26.921706694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.924145 containerd[1470]: time="2025-02-13T15:36:26.923599960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:26.924145 containerd[1470]: time="2025-02-13T15:36:26.923664052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:26.924145 containerd[1470]: time="2025-02-13T15:36:26.923679636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.924894 containerd[1470]: time="2025-02-13T15:36:26.924791573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:26.946538 systemd[1]: Started cri-containerd-21b6a988e90f5319d1fc31fc0fc45f9da452f8ef975e790d3476a822061858e2.scope - libcontainer container 21b6a988e90f5319d1fc31fc0fc45f9da452f8ef975e790d3476a822061858e2. Feb 13 15:36:26.948297 systemd[1]: Started cri-containerd-8fb7becf3425287e72e93f3ebae1abd824eb6f622bdd888b6542df69eff47578.scope - libcontainer container 8fb7becf3425287e72e93f3ebae1abd824eb6f622bdd888b6542df69eff47578. Feb 13 15:36:26.952178 systemd[1]: Started cri-containerd-b9eecfe876f0187633c30f86d571712fb995b3155f4e289f238db6a155585ad8.scope - libcontainer container b9eecfe876f0187633c30f86d571712fb995b3155f4e289f238db6a155585ad8. 
Feb 13 15:36:26.983103 containerd[1470]: time="2025-02-13T15:36:26.982866722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e40766202139838693e08ce69a9f7dbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"21b6a988e90f5319d1fc31fc0fc45f9da452f8ef975e790d3476a822061858e2\"" Feb 13 15:36:26.985373 kubelet[2145]: E0213 15:36:26.985285 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.988056 containerd[1470]: time="2025-02-13T15:36:26.988011171Z" level=info msg="CreateContainer within sandbox \"21b6a988e90f5319d1fc31fc0fc45f9da452f8ef975e790d3476a822061858e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:36:26.989682 containerd[1470]: time="2025-02-13T15:36:26.989639239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fb7becf3425287e72e93f3ebae1abd824eb6f622bdd888b6542df69eff47578\"" Feb 13 15:36:26.990390 kubelet[2145]: E0213 15:36:26.990357 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.992996 containerd[1470]: time="2025-02-13T15:36:26.992871122Z" level=info msg="CreateContainer within sandbox \"8fb7becf3425287e72e93f3ebae1abd824eb6f622bdd888b6542df69eff47578\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:36:26.993585 containerd[1470]: time="2025-02-13T15:36:26.993493300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9eecfe876f0187633c30f86d571712fb995b3155f4e289f238db6a155585ad8\"" Feb 13 
15:36:26.994404 kubelet[2145]: E0213 15:36:26.994377 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:26.996362 containerd[1470]: time="2025-02-13T15:36:26.996302152Z" level=info msg="CreateContainer within sandbox \"b9eecfe876f0187633c30f86d571712fb995b3155f4e289f238db6a155585ad8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:36:27.014295 containerd[1470]: time="2025-02-13T15:36:27.014236236Z" level=info msg="CreateContainer within sandbox \"21b6a988e90f5319d1fc31fc0fc45f9da452f8ef975e790d3476a822061858e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f654fcb7b877ffa6d9b6459ba1f00078b40cb7e0bc6b0e5c044bee4152496719\"" Feb 13 15:36:27.015101 containerd[1470]: time="2025-02-13T15:36:27.015053755Z" level=info msg="StartContainer for \"f654fcb7b877ffa6d9b6459ba1f00078b40cb7e0bc6b0e5c044bee4152496719\"" Feb 13 15:36:27.015944 containerd[1470]: time="2025-02-13T15:36:27.015903884Z" level=info msg="CreateContainer within sandbox \"b9eecfe876f0187633c30f86d571712fb995b3155f4e289f238db6a155585ad8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7dfdc2db0f5322e8b20b633c7cbab28be5662956299d7815ffc5a32170b056de\"" Feb 13 15:36:27.016417 containerd[1470]: time="2025-02-13T15:36:27.016389472Z" level=info msg="StartContainer for \"7dfdc2db0f5322e8b20b633c7cbab28be5662956299d7815ffc5a32170b056de\"" Feb 13 15:36:27.017612 containerd[1470]: time="2025-02-13T15:36:27.017210069Z" level=info msg="CreateContainer within sandbox \"8fb7becf3425287e72e93f3ebae1abd824eb6f622bdd888b6542df69eff47578\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b36cbdb20df95298437981458da0f5de4e0fcc19652c72346f6a71f55dfe53f8\"" Feb 13 15:36:27.018270 containerd[1470]: time="2025-02-13T15:36:27.017872492Z" level=info 
msg="StartContainer for \"b36cbdb20df95298437981458da0f5de4e0fcc19652c72346f6a71f55dfe53f8\"" Feb 13 15:36:27.044546 systemd[1]: Started cri-containerd-7dfdc2db0f5322e8b20b633c7cbab28be5662956299d7815ffc5a32170b056de.scope - libcontainer container 7dfdc2db0f5322e8b20b633c7cbab28be5662956299d7815ffc5a32170b056de. Feb 13 15:36:27.045694 systemd[1]: Started cri-containerd-f654fcb7b877ffa6d9b6459ba1f00078b40cb7e0bc6b0e5c044bee4152496719.scope - libcontainer container f654fcb7b877ffa6d9b6459ba1f00078b40cb7e0bc6b0e5c044bee4152496719. Feb 13 15:36:27.048766 systemd[1]: Started cri-containerd-b36cbdb20df95298437981458da0f5de4e0fcc19652c72346f6a71f55dfe53f8.scope - libcontainer container b36cbdb20df95298437981458da0f5de4e0fcc19652c72346f6a71f55dfe53f8. Feb 13 15:36:27.106185 kubelet[2145]: W0213 15:36:27.105494 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:27.106185 kubelet[2145]: E0213 15:36:27.105571 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:27.106636 containerd[1470]: time="2025-02-13T15:36:27.106590735Z" level=info msg="StartContainer for \"f654fcb7b877ffa6d9b6459ba1f00078b40cb7e0bc6b0e5c044bee4152496719\" returns successfully" Feb 13 15:36:27.106880 containerd[1470]: time="2025-02-13T15:36:27.106710264Z" level=info msg="StartContainer for \"b36cbdb20df95298437981458da0f5de4e0fcc19652c72346f6a71f55dfe53f8\" returns successfully" Feb 13 15:36:27.106908 containerd[1470]: time="2025-02-13T15:36:27.106892175Z" level=info msg="StartContainer for 
\"7dfdc2db0f5322e8b20b633c7cbab28be5662956299d7815ffc5a32170b056de\" returns successfully" Feb 13 15:36:27.165563 kubelet[2145]: W0213 15:36:27.165490 2145 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Feb 13 15:36:27.165563 kubelet[2145]: E0213 15:36:27.165559 2145 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:36:27.515121 kubelet[2145]: I0213 15:36:27.514815 2145 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:36:27.917338 kubelet[2145]: E0213 15:36:27.916555 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:27.918377 kubelet[2145]: E0213 15:36:27.918352 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:27.923725 kubelet[2145]: E0213 15:36:27.923652 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:28.850201 kubelet[2145]: E0213 15:36:28.850158 2145 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:36:28.883309 kubelet[2145]: I0213 15:36:28.883272 2145 apiserver.go:52] "Watching apiserver" Feb 13 15:36:28.925295 
kubelet[2145]: E0213 15:36:28.925265 2145 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:28.976939 kubelet[2145]: E0213 15:36:28.976819 2145 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823ce8cf46b57c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:36:25.88656224 +0000 UTC m=+0.790579607,LastTimestamp:2025-02-13 15:36:25.88656224 +0000 UTC m=+0.790579607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:36:28.978097 kubelet[2145]: I0213 15:36:28.978062 2145 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:36:28.992104 kubelet[2145]: I0213 15:36:28.992051 2145 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:36:29.031354 kubelet[2145]: E0213 15:36:29.031232 2145 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823ce8cf4e72ab6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:36:25.894677174 +0000 UTC m=+0.798694501,LastTimestamp:2025-02-13 15:36:25.894677174 +0000 UTC 
m=+0.798694501,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:36:29.084995 kubelet[2145]: E0213 15:36:29.084898 2145 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823ce8cf5b3b525 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:36:25.908081957 +0000 UTC m=+0.812099324,LastTimestamp:2025-02-13 15:36:25.908081957 +0000 UTC m=+0.812099324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:36:31.052736 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-5.scope)... Feb 13 15:36:31.053188 systemd[1]: Reloading... Feb 13 15:36:31.115351 zram_generator::config[2463]: No configuration found. Feb 13 15:36:31.279171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:36:31.343727 systemd[1]: Reloading finished in 290 ms. Feb 13 15:36:31.373972 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:31.388383 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:36:31.388632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:31.388700 systemd[1]: kubelet.service: Consumed 1.218s CPU time, 118.0M memory peak, 0B memory swap peak. 
Feb 13 15:36:31.398723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:36:31.493249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:36:31.498293 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:36:31.536590 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:31.536590 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:36:31.536590 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:36:31.536956 kubelet[2504]: I0213 15:36:31.536602 2504 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:36:31.543286 kubelet[2504]: I0213 15:36:31.543232 2504 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:36:31.543286 kubelet[2504]: I0213 15:36:31.543272 2504 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:36:31.543567 kubelet[2504]: I0213 15:36:31.543536 2504 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:36:31.545002 kubelet[2504]: I0213 15:36:31.544969 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 15:36:31.548706 kubelet[2504]: I0213 15:36:31.548680 2504 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:36:31.551599 kubelet[2504]: E0213 15:36:31.551520 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:36:31.551599 kubelet[2504]: I0213 15:36:31.551556 2504 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:36:31.554117 kubelet[2504]: I0213 15:36:31.554094 2504 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:36:31.554573 kubelet[2504]: I0213 15:36:31.554534 2504 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:36:31.554885 kubelet[2504]: I0213 15:36:31.554772 2504 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:36:31.555413 kubelet[2504]: I0213 15:36:31.554815 2504 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:36:31.555534 kubelet[2504]: I0213 15:36:31.555431 2504 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:36:31.555534 kubelet[2504]: I0213 15:36:31.555446 2504 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:36:31.555534 kubelet[2504]: I0213 15:36:31.555489 2504 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:36:31.555682 kubelet[2504]: I0213 15:36:31.555609 2504 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:36:31.555682 kubelet[2504]: I0213 15:36:31.555627 2504 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:36:31.555682 kubelet[2504]: I0213 15:36:31.555658 2504 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:36:31.555682 kubelet[2504]: I0213 15:36:31.555674 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:36:31.556925 kubelet[2504]: I0213 15:36:31.556904 2504 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:36:31.557707 kubelet[2504]: I0213 15:36:31.557689 2504 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:36:31.559680 kubelet[2504]: I0213 15:36:31.559641 2504 server.go:1269] "Started kubelet"
Feb 13 15:36:31.560423 kubelet[2504]: I0213 15:36:31.560358 2504 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:36:31.561043 kubelet[2504]: I0213 15:36:31.560989 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:36:31.561481 kubelet[2504]: I0213 15:36:31.561464 2504 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:36:31.564245 kubelet[2504]: I0213 15:36:31.564218 2504 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:36:31.564528 kubelet[2504]: I0213 15:36:31.564513 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:36:31.565459 kubelet[2504]: I0213 15:36:31.565422 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:36:31.566986 kubelet[2504]: E0213 15:36:31.566925 2504 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:36:31.566986 kubelet[2504]: I0213 15:36:31.566973 2504 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:36:31.567189 kubelet[2504]: I0213 15:36:31.567137 2504 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:36:31.567405 kubelet[2504]: I0213 15:36:31.567266 2504 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:36:31.569064 kubelet[2504]: I0213 15:36:31.568403 2504 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:36:31.569064 kubelet[2504]: I0213 15:36:31.568525 2504 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:36:31.569727 kubelet[2504]: I0213 15:36:31.569698 2504 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:36:31.575424 kubelet[2504]: E0213 15:36:31.575380 2504 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:36:31.582099 kubelet[2504]: I0213 15:36:31.582061 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:36:31.584171 kubelet[2504]: I0213 15:36:31.584141 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:36:31.584171 kubelet[2504]: I0213 15:36:31.584172 2504 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:36:31.584231 kubelet[2504]: I0213 15:36:31.584201 2504 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:36:31.584273 kubelet[2504]: E0213 15:36:31.584249 2504 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:36:31.618250 kubelet[2504]: I0213 15:36:31.618144 2504 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:36:31.618250 kubelet[2504]: I0213 15:36:31.618169 2504 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:36:31.618250 kubelet[2504]: I0213 15:36:31.618191 2504 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:36:31.619162 kubelet[2504]: I0213 15:36:31.619128 2504 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:36:31.619197 kubelet[2504]: I0213 15:36:31.619156 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:36:31.619197 kubelet[2504]: I0213 15:36:31.619177 2504 policy_none.go:49] "None policy: Start"
Feb 13 15:36:31.619931 kubelet[2504]: I0213 15:36:31.619909 2504 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:36:31.620013 kubelet[2504]: I0213 15:36:31.619940 2504 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:36:31.620148 kubelet[2504]: I0213 15:36:31.620131 2504 state_mem.go:75] "Updated machine memory state"
Feb 13 15:36:31.623847 kubelet[2504]: I0213 15:36:31.623821 2504 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:36:31.624422 kubelet[2504]: I0213 15:36:31.624398 2504 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:36:31.624546 kubelet[2504]: I0213 15:36:31.624508 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:36:31.625472 kubelet[2504]: I0213 15:36:31.625447 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:36:31.729410 kubelet[2504]: I0213 15:36:31.729028 2504 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:36:31.737684 kubelet[2504]: I0213 15:36:31.737644 2504 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Feb 13 15:36:31.737929 kubelet[2504]: I0213 15:36:31.737740 2504 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 15:36:31.768731 kubelet[2504]: I0213 15:36:31.768686 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:36:31.768731 kubelet[2504]: I0213 15:36:31.768727 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:36:31.768897 kubelet[2504]: I0213 15:36:31.768750 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:36:31.768897 kubelet[2504]: I0213 15:36:31.768770 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:36:31.768897 kubelet[2504]: I0213 15:36:31.768786 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:36:31.768897 kubelet[2504]: I0213 15:36:31.768803 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:36:31.768897 kubelet[2504]: I0213 15:36:31.768817 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:36:31.769001 kubelet[2504]: I0213 15:36:31.768846 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:36:31.769001 kubelet[2504]: I0213 15:36:31.768860 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e40766202139838693e08ce69a9f7dbf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e40766202139838693e08ce69a9f7dbf\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:36:32.033652 kubelet[2504]: E0213 15:36:32.033453 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.035610 kubelet[2504]: E0213 15:36:32.035524 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.035610 kubelet[2504]: E0213 15:36:32.035586 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.557217 kubelet[2504]: I0213 15:36:32.557171 2504 apiserver.go:52] "Watching apiserver"
Feb 13 15:36:32.568140 kubelet[2504]: I0213 15:36:32.568100 2504 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:36:32.605730 kubelet[2504]: E0213 15:36:32.605658 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.606696 kubelet[2504]: E0213 15:36:32.605909 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.615930 kubelet[2504]: E0213 15:36:32.613230 2504 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:36:32.615930 kubelet[2504]: E0213 15:36:32.613433 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.638838 kubelet[2504]: I0213 15:36:32.638622 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6385968050000002 podStartE2EDuration="1.638596805s" podCreationTimestamp="2025-02-13 15:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:32.638560622 +0000 UTC m=+1.137043583" watchObservedRunningTime="2025-02-13 15:36:32.638596805 +0000 UTC m=+1.137079766"
Feb 13 15:36:32.720124 kubelet[2504]: I0213 15:36:32.719471 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.71945283 podStartE2EDuration="1.71945283s" podCreationTimestamp="2025-02-13 15:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:32.719407932 +0000 UTC m=+1.217890893" watchObservedRunningTime="2025-02-13 15:36:32.71945283 +0000 UTC m=+1.217935791"
Feb 13 15:36:32.720263 kubelet[2504]: I0213 15:36:32.720204 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.720193957 podStartE2EDuration="1.720193957s" podCreationTimestamp="2025-02-13 15:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:32.688962536 +0000 UTC m=+1.187445497" watchObservedRunningTime="2025-02-13 15:36:32.720193957 +0000 UTC m=+1.218676918"
Feb 13 15:36:32.910221 sudo[1608]: pam_unix(sudo:session): session closed for user root
Feb 13 15:36:32.911605 sshd[1607]: Connection closed by 10.0.0.1 port 57148
Feb 13 15:36:32.912067 sshd-session[1605]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:32.915904 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:57148.service: Deactivated successfully.
Feb 13 15:36:32.917954 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:36:32.918145 systemd[1]: session-5.scope: Consumed 7.730s CPU time, 156.3M memory peak, 0B memory swap peak.
Feb 13 15:36:32.918725 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:36:32.919870 systemd-logind[1452]: Removed session 5.
Feb 13 15:36:33.606505 kubelet[2504]: E0213 15:36:33.606474 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:33.606954 kubelet[2504]: E0213 15:36:33.606523 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:36.294137 kubelet[2504]: E0213 15:36:36.294050 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:38.088144 kubelet[2504]: I0213 15:36:38.088056 2504 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:36:38.088537 containerd[1470]: time="2025-02-13T15:36:38.088430177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:36:38.089513 kubelet[2504]: I0213 15:36:38.088893 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:36:38.321328 kubelet[2504]: E0213 15:36:38.318121 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:38.616894 kubelet[2504]: E0213 15:36:38.616843 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:38.851074 systemd[1]: Created slice kubepods-besteffort-poda6edf78b_d02d_464f_afa4_22a36a134dc3.slice - libcontainer container kubepods-besteffort-poda6edf78b_d02d_464f_afa4_22a36a134dc3.slice.
Feb 13 15:36:38.865837 systemd[1]: Created slice kubepods-burstable-podb06de996_c126_4bd1_88d4_10200ab32b2c.slice - libcontainer container kubepods-burstable-podb06de996_c126_4bd1_88d4_10200ab32b2c.slice.
Feb 13 15:36:38.918874 kubelet[2504]: I0213 15:36:38.918499 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9jh\" (UniqueName: \"kubernetes.io/projected/b06de996-c126-4bd1-88d4-10200ab32b2c-kube-api-access-tk9jh\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:38.918874 kubelet[2504]: I0213 15:36:38.918558 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6edf78b-d02d-464f-afa4-22a36a134dc3-lib-modules\") pod \"kube-proxy-rswpf\" (UID: \"a6edf78b-d02d-464f-afa4-22a36a134dc3\") " pod="kube-system/kube-proxy-rswpf"
Feb 13 15:36:38.918874 kubelet[2504]: I0213 15:36:38.918579 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b06de996-c126-4bd1-88d4-10200ab32b2c-cni\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:38.918874 kubelet[2504]: I0213 15:36:38.918596 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b06de996-c126-4bd1-88d4-10200ab32b2c-flannel-cfg\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:38.918874 kubelet[2504]: I0213 15:36:38.918612 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282hg\" (UniqueName: \"kubernetes.io/projected/a6edf78b-d02d-464f-afa4-22a36a134dc3-kube-api-access-282hg\") pod \"kube-proxy-rswpf\" (UID: \"a6edf78b-d02d-464f-afa4-22a36a134dc3\") " pod="kube-system/kube-proxy-rswpf"
Feb 13 15:36:38.919087 kubelet[2504]: I0213 15:36:38.918628 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b06de996-c126-4bd1-88d4-10200ab32b2c-run\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:38.919087 kubelet[2504]: I0213 15:36:38.918650 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b06de996-c126-4bd1-88d4-10200ab32b2c-cni-plugin\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:38.919087 kubelet[2504]: I0213 15:36:38.918665 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6edf78b-d02d-464f-afa4-22a36a134dc3-kube-proxy\") pod \"kube-proxy-rswpf\" (UID: \"a6edf78b-d02d-464f-afa4-22a36a134dc3\") " pod="kube-system/kube-proxy-rswpf"
Feb 13 15:36:38.919087 kubelet[2504]: I0213 15:36:38.918679 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6edf78b-d02d-464f-afa4-22a36a134dc3-xtables-lock\") pod \"kube-proxy-rswpf\" (UID: \"a6edf78b-d02d-464f-afa4-22a36a134dc3\") " pod="kube-system/kube-proxy-rswpf"
Feb 13 15:36:38.919087 kubelet[2504]: I0213 15:36:38.918693 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b06de996-c126-4bd1-88d4-10200ab32b2c-xtables-lock\") pod \"kube-flannel-ds-fzl6x\" (UID: \"b06de996-c126-4bd1-88d4-10200ab32b2c\") " pod="kube-flannel/kube-flannel-ds-fzl6x"
Feb 13 15:36:39.161327 kubelet[2504]: E0213 15:36:39.161210 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:39.162954 containerd[1470]: time="2025-02-13T15:36:39.162888020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rswpf,Uid:a6edf78b-d02d-464f-afa4-22a36a134dc3,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:39.172424 kubelet[2504]: E0213 15:36:39.172298 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:39.174944 containerd[1470]: time="2025-02-13T15:36:39.174605154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fzl6x,Uid:b06de996-c126-4bd1-88d4-10200ab32b2c,Namespace:kube-flannel,Attempt:0,}"
Feb 13 15:36:39.183450 containerd[1470]: time="2025-02-13T15:36:39.183353152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:39.183450 containerd[1470]: time="2025-02-13T15:36:39.183411435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:39.183450 containerd[1470]: time="2025-02-13T15:36:39.183424155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:39.183867 containerd[1470]: time="2025-02-13T15:36:39.183827574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:39.198675 containerd[1470]: time="2025-02-13T15:36:39.198165507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:39.198675 containerd[1470]: time="2025-02-13T15:36:39.198238950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:39.200143 containerd[1470]: time="2025-02-13T15:36:39.198836698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:39.200143 containerd[1470]: time="2025-02-13T15:36:39.198962063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:39.207195 systemd[1]: Started cri-containerd-95d546c7b83f6f821295222f6c519d4104d56d0a1b10207870a39cc91880c197.scope - libcontainer container 95d546c7b83f6f821295222f6c519d4104d56d0a1b10207870a39cc91880c197.
Feb 13 15:36:39.216768 systemd[1]: Started cri-containerd-95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a.scope - libcontainer container 95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a.
Feb 13 15:36:39.236352 containerd[1470]: time="2025-02-13T15:36:39.235967550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rswpf,Uid:a6edf78b-d02d-464f-afa4-22a36a134dc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d546c7b83f6f821295222f6c519d4104d56d0a1b10207870a39cc91880c197\""
Feb 13 15:36:39.238405 kubelet[2504]: E0213 15:36:39.237179 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:39.241041 containerd[1470]: time="2025-02-13T15:36:39.241000579Z" level=info msg="CreateContainer within sandbox \"95d546c7b83f6f821295222f6c519d4104d56d0a1b10207870a39cc91880c197\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:36:39.256604 containerd[1470]: time="2025-02-13T15:36:39.256554968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fzl6x,Uid:b06de996-c126-4bd1-88d4-10200ab32b2c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\""
Feb 13 15:36:39.257487 kubelet[2504]: E0213 15:36:39.257463 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:39.258486 containerd[1470]: time="2025-02-13T15:36:39.258447334Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 15:36:39.259185 containerd[1470]: time="2025-02-13T15:36:39.259055722Z" level=info msg="CreateContainer within sandbox \"95d546c7b83f6f821295222f6c519d4104d56d0a1b10207870a39cc91880c197\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"808d8d8440758770624e1b7e0244eaee941c4d742c78afed71e58016ee294b9e\""
Feb 13 15:36:39.260417 containerd[1470]: time="2025-02-13T15:36:39.259720352Z" level=info msg="StartContainer for \"808d8d8440758770624e1b7e0244eaee941c4d742c78afed71e58016ee294b9e\""
Feb 13 15:36:39.291549 systemd[1]: Started cri-containerd-808d8d8440758770624e1b7e0244eaee941c4d742c78afed71e58016ee294b9e.scope - libcontainer container 808d8d8440758770624e1b7e0244eaee941c4d742c78afed71e58016ee294b9e.
Feb 13 15:36:39.320413 containerd[1470]: time="2025-02-13T15:36:39.319845891Z" level=info msg="StartContainer for \"808d8d8440758770624e1b7e0244eaee941c4d742c78afed71e58016ee294b9e\" returns successfully"
Feb 13 15:36:39.616783 kubelet[2504]: E0213 15:36:39.616752 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:39.627058 kubelet[2504]: I0213 15:36:39.626937 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rswpf" podStartSLOduration=1.626920843 podStartE2EDuration="1.626920843s" podCreationTimestamp="2025-02-13 15:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:39.626808318 +0000 UTC m=+8.125291279" watchObservedRunningTime="2025-02-13 15:36:39.626920843 +0000 UTC m=+8.125403804"
Feb 13 15:36:40.523383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003833464.mount: Deactivated successfully.
Feb 13 15:36:40.553197 containerd[1470]: time="2025-02-13T15:36:40.553145626Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:40.553685 containerd[1470]: time="2025-02-13T15:36:40.553637168Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Feb 13 15:36:40.554330 containerd[1470]: time="2025-02-13T15:36:40.554270115Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:40.556603 containerd[1470]: time="2025-02-13T15:36:40.556560334Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:40.558086 containerd[1470]: time="2025-02-13T15:36:40.557902032Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.299419216s"
Feb 13 15:36:40.558086 containerd[1470]: time="2025-02-13T15:36:40.557933793Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Feb 13 15:36:40.560398 containerd[1470]: time="2025-02-13T15:36:40.560351017Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 15:36:40.570489 containerd[1470]: time="2025-02-13T15:36:40.570441852Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8\""
Feb 13 15:36:40.571363 containerd[1470]: time="2025-02-13T15:36:40.570892672Z" level=info msg="StartContainer for \"f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8\""
Feb 13 15:36:40.595874 systemd[1]: Started cri-containerd-f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8.scope - libcontainer container f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8.
Feb 13 15:36:40.622349 containerd[1470]: time="2025-02-13T15:36:40.622279369Z" level=info msg="StartContainer for \"f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8\" returns successfully"
Feb 13 15:36:40.626170 systemd[1]: cri-containerd-f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8.scope: Deactivated successfully.
Feb 13 15:36:40.666194 containerd[1470]: time="2025-02-13T15:36:40.666137420Z" level=info msg="shim disconnected" id=f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8 namespace=k8s.io
Feb 13 15:36:40.666194 containerd[1470]: time="2025-02-13T15:36:40.666188383Z" level=warning msg="cleaning up after shim disconnected" id=f6293b931cc9945da9e121d38cb317d89ea8b746e0636d7c25d3c4d9244c1bc8 namespace=k8s.io
Feb 13 15:36:40.666194 containerd[1470]: time="2025-02-13T15:36:40.666196463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:41.626916 kubelet[2504]: E0213 15:36:41.626810 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:41.628077 containerd[1470]: time="2025-02-13T15:36:41.627886361Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 15:36:42.832089 kubelet[2504]: E0213 15:36:42.832055 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:43.032530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291430829.mount: Deactivated successfully.
Feb 13 15:36:43.520358 containerd[1470]: time="2025-02-13T15:36:43.519708462Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:43.520778 containerd[1470]: time="2025-02-13T15:36:43.520732020Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260"
Feb 13 15:36:43.521400 containerd[1470]: time="2025-02-13T15:36:43.521365603Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:43.524152 containerd[1470]: time="2025-02-13T15:36:43.524115824Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:43.525223 containerd[1470]: time="2025-02-13T15:36:43.525192984Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.89724918s"
Feb 13 15:36:43.525278 containerd[1470]: time="2025-02-13T15:36:43.525226465Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 15:36:43.528269 containerd[1470]: time="2025-02-13T15:36:43.528116211Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:36:43.537601 containerd[1470]: time="2025-02-13T15:36:43.537551118Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d\""
Feb 13 15:36:43.538257 containerd[1470]: time="2025-02-13T15:36:43.538035655Z" level=info msg="StartContainer for \"3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d\""
Feb 13 15:36:43.566508 systemd[1]: Started cri-containerd-3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d.scope - libcontainer container 3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d.
Feb 13 15:36:43.594527 systemd[1]: cri-containerd-3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d.scope: Deactivated successfully.
Feb 13 15:36:43.601517 containerd[1470]: time="2025-02-13T15:36:43.601468105Z" level=info msg="StartContainer for \"3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d\" returns successfully"
Feb 13 15:36:43.623011 kubelet[2504]: I0213 15:36:43.622981 2504 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:36:43.637404 kubelet[2504]: E0213 15:36:43.636854 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:43.651981 kubelet[2504]: I0213 15:36:43.651540 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a94e7379-499e-4451-92a3-455330d73b9a-config-volume\") pod \"coredns-6f6b679f8f-7t8kp\" (UID: \"a94e7379-499e-4451-92a3-455330d73b9a\") " pod="kube-system/coredns-6f6b679f8f-7t8kp"
Feb 13 15:36:43.654219 kubelet[2504]: I0213 15:36:43.654198 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knv4h\" (UniqueName: \"kubernetes.io/projected/a94e7379-499e-4451-92a3-455330d73b9a-kube-api-access-knv4h\") pod \"coredns-6f6b679f8f-7t8kp\" (UID: \"a94e7379-499e-4451-92a3-455330d73b9a\") " pod="kube-system/coredns-6f6b679f8f-7t8kp"
Feb 13 15:36:43.659793 systemd[1]: Created slice kubepods-burstable-poda94e7379_499e_4451_92a3_455330d73b9a.slice - libcontainer container kubepods-burstable-poda94e7379_499e_4451_92a3_455330d73b9a.slice.
Feb 13 15:36:43.664627 systemd[1]: Created slice kubepods-burstable-pod475409ed_56fa_47af_b267_6bfd38371506.slice - libcontainer container kubepods-burstable-pod475409ed_56fa_47af_b267_6bfd38371506.slice.
Feb 13 15:36:43.700594 containerd[1470]: time="2025-02-13T15:36:43.700420939Z" level=info msg="shim disconnected" id=3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d namespace=k8s.io
Feb 13 15:36:43.700594 containerd[1470]: time="2025-02-13T15:36:43.700477021Z" level=warning msg="cleaning up after shim disconnected" id=3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d namespace=k8s.io
Feb 13 15:36:43.700594 containerd[1470]: time="2025-02-13T15:36:43.700484781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:43.755568 kubelet[2504]: I0213 15:36:43.755499 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/475409ed-56fa-47af-b267-6bfd38371506-config-volume\") pod \"coredns-6f6b679f8f-jthk8\" (UID: \"475409ed-56fa-47af-b267-6bfd38371506\") " pod="kube-system/coredns-6f6b679f8f-jthk8"
Feb 13 15:36:43.755731 kubelet[2504]: I0213 15:36:43.755595 2504 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vfm\" (UniqueName: \"kubernetes.io/projected/475409ed-56fa-47af-b267-6bfd38371506-kube-api-access-l7vfm\") pod \"coredns-6f6b679f8f-jthk8\" (UID: \"475409ed-56fa-47af-b267-6bfd38371506\") " pod="kube-system/coredns-6f6b679f8f-jthk8"
Feb 13 15:36:43.952764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf311ddb5e768e98231007e109edc3577f368b47677017024121fd4e1ecff7d-rootfs.mount: Deactivated successfully.
Feb 13 15:36:43.963851 kubelet[2504]: E0213 15:36:43.963810 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:43.964453 containerd[1470]: time="2025-02-13T15:36:43.964421594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t8kp,Uid:a94e7379-499e-4451-92a3-455330d73b9a,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:43.970261 kubelet[2504]: E0213 15:36:43.970226 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:43.971872 containerd[1470]: time="2025-02-13T15:36:43.971835786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jthk8,Uid:475409ed-56fa-47af-b267-6bfd38371506,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:44.034520 containerd[1470]: time="2025-02-13T15:36:44.034448711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t8kp,Uid:a94e7379-499e-4451-92a3-455330d73b9a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:44.034750 kubelet[2504]: E0213 15:36:44.034707 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:44.034818 kubelet[2504]: E0213 15:36:44.034795 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7t8kp"
Feb 13 15:36:44.034818 kubelet[2504]: E0213 15:36:44.034817 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7t8kp"
Feb 13 15:36:44.034908 kubelet[2504]: E0213 15:36:44.034883 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7t8kp_kube-system(a94e7379-499e-4451-92a3-455330d73b9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7t8kp_kube-system(a94e7379-499e-4451-92a3-455330d73b9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-7t8kp" podUID="a94e7379-499e-4451-92a3-455330d73b9a"
Feb 13 15:36:44.035844 containerd[1470]: time="2025-02-13T15:36:44.035813319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jthk8,Uid:475409ed-56fa-47af-b267-6bfd38371506,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:44.036020 kubelet[2504]: E0213 15:36:44.035995 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:36:44.036056 kubelet[2504]: E0213 15:36:44.036036 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jthk8"
Feb 13 15:36:44.036079 kubelet[2504]: E0213 15:36:44.036053 2504 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jthk8"
Feb 13 15:36:44.036108 kubelet[2504]: E0213 15:36:44.036083 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jthk8_kube-system(475409ed-56fa-47af-b267-6bfd38371506)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jthk8_kube-system(475409ed-56fa-47af-b267-6bfd38371506)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-jthk8" podUID="475409ed-56fa-47af-b267-6bfd38371506"
Feb 13 15:36:44.639031 kubelet[2504]: E0213 15:36:44.638989 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:44.641681 containerd[1470]: time="2025-02-13T15:36:44.641646871Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:36:44.655331 containerd[1470]: time="2025-02-13T15:36:44.653090750Z" level=info msg="CreateContainer within sandbox \"95613e77f7f8370de7895a8cded6d1c3f378286d632b59f4f7f042fc714d116a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9fb74adf2fd4da47a914ee95b1c68b130d46c5cdf36c82e1e2b43ba20b36b76e\""
Feb 13 15:36:44.656183 containerd[1470]: time="2025-02-13T15:36:44.656149376Z" level=info msg="StartContainer for \"9fb74adf2fd4da47a914ee95b1c68b130d46c5cdf36c82e1e2b43ba20b36b76e\""
Feb 13 15:36:44.679503 systemd[1]: Started cri-containerd-9fb74adf2fd4da47a914ee95b1c68b130d46c5cdf36c82e1e2b43ba20b36b76e.scope - libcontainer container 9fb74adf2fd4da47a914ee95b1c68b130d46c5cdf36c82e1e2b43ba20b36b76e.
Feb 13 15:36:44.711119 containerd[1470]: time="2025-02-13T15:36:44.711063650Z" level=info msg="StartContainer for \"9fb74adf2fd4da47a914ee95b1c68b130d46c5cdf36c82e1e2b43ba20b36b76e\" returns successfully"
Feb 13 15:36:44.951833 systemd[1]: run-netns-cni\x2d31b08956\x2dc5dc\x2dcc86\x2d2fd4\x2d7ad3ef49d203.mount: Deactivated successfully.
Feb 13 15:36:44.951932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd3d0a82568f67c0f9d361a9957917d11f83a3cdda6002f9ef6c5e48a7680283-shm.mount: Deactivated successfully.
Feb 13 15:36:44.951994 systemd[1]: run-netns-cni\x2da752e698\x2d1f42\x2dd087\x2d2cc4\x2dd9f0e38636ab.mount: Deactivated successfully.
Feb 13 15:36:44.952036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f5f9634423176ee835d527dcb733cda25ea4b5b50aa81a39cc52086772767da-shm.mount: Deactivated successfully.
Feb 13 15:36:45.642786 kubelet[2504]: E0213 15:36:45.642744 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:45.652923 kubelet[2504]: I0213 15:36:45.652785 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-fzl6x" podStartSLOduration=3.384408647 podStartE2EDuration="7.652767721s" podCreationTimestamp="2025-02-13 15:36:38 +0000 UTC" firstStartedPulling="2025-02-13 15:36:39.25792079 +0000 UTC m=+7.756403751" lastFinishedPulling="2025-02-13 15:36:43.526279864 +0000 UTC m=+12.024762825" observedRunningTime="2025-02-13 15:36:45.652363988 +0000 UTC m=+14.150846949" watchObservedRunningTime="2025-02-13 15:36:45.652767721 +0000 UTC m=+14.151250722"
Feb 13 15:36:45.794058 systemd-networkd[1386]: flannel.1: Link UP
Feb 13 15:36:45.794065 systemd-networkd[1386]: flannel.1: Gained carrier
Feb 13 15:36:46.062770 update_engine[1457]: I20250213 15:36:46.062695 1457 update_attempter.cc:509] Updating boot flags...
Feb 13 15:36:46.082388 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3151)
Feb 13 15:36:46.113188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3153)
Feb 13 15:36:46.300880 kubelet[2504]: E0213 15:36:46.300833 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:46.644683 kubelet[2504]: E0213 15:36:46.644644 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:47.469666 systemd-networkd[1386]: flannel.1: Gained IPv6LL
Feb 13 15:36:56.244302 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972).
Feb 13 15:36:56.286978 sshd[3201]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:36:56.288236 sshd-session[3201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:56.292358 systemd-logind[1452]: New session 6 of user core.
Feb 13 15:36:56.304480 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:36:56.421064 sshd[3203]: Connection closed by 10.0.0.1 port 43972
Feb 13 15:36:56.421382 sshd-session[3201]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:56.424278 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:43972.service: Deactivated successfully.
Feb 13 15:36:56.425735 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:36:56.427514 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:36:56.428329 systemd-logind[1452]: Removed session 6.
Feb 13 15:36:56.585518 kubelet[2504]: E0213 15:36:56.585488 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:56.585935 containerd[1470]: time="2025-02-13T15:36:56.585900683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jthk8,Uid:475409ed-56fa-47af-b267-6bfd38371506,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:56.609567 systemd-networkd[1386]: cni0: Link UP
Feb 13 15:36:56.609573 systemd-networkd[1386]: cni0: Gained carrier
Feb 13 15:36:56.611666 systemd-networkd[1386]: cni0: Lost carrier
Feb 13 15:36:56.615818 systemd-networkd[1386]: veth659b4014: Link UP
Feb 13 15:36:56.621995 kernel: cni0: port 1(veth659b4014) entered blocking state
Feb 13 15:36:56.622080 kernel: cni0: port 1(veth659b4014) entered disabled state
Feb 13 15:36:56.623812 kernel: veth659b4014: entered allmulticast mode
Feb 13 15:36:56.623877 kernel: veth659b4014: entered promiscuous mode
Feb 13 15:36:56.625120 kernel: cni0: port 1(veth659b4014) entered blocking state
Feb 13 15:36:56.625167 kernel: cni0: port 1(veth659b4014) entered forwarding state
Feb 13 15:36:56.626480 kernel: cni0: port 1(veth659b4014) entered disabled state
Feb 13 15:36:56.633394 kernel: cni0: port 1(veth659b4014) entered blocking state
Feb 13 15:36:56.633455 kernel: cni0: port 1(veth659b4014) entered forwarding state
Feb 13 15:36:56.633363 systemd-networkd[1386]: veth659b4014: Gained carrier
Feb 13 15:36:56.633819 systemd-networkd[1386]: cni0: Gained carrier
Feb 13 15:36:56.635357 containerd[1470]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"}
Feb 13 15:36:56.635357 containerd[1470]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:36:56.653312 containerd[1470]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:36:56.653223408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:56.653312 containerd[1470]: time="2025-02-13T15:36:56.653298369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:56.653464 containerd[1470]: time="2025-02-13T15:36:56.653338970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:56.653488 containerd[1470]: time="2025-02-13T15:36:56.653444812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:56.672480 systemd[1]: Started cri-containerd-eca269f848b993409e21a44ef0ffb69942a503054d3ee91c3a6f985e85e2dbb5.scope - libcontainer container eca269f848b993409e21a44ef0ffb69942a503054d3ee91c3a6f985e85e2dbb5.
Feb 13 15:36:56.684214 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:36:56.701025 containerd[1470]: time="2025-02-13T15:36:56.700986388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jthk8,Uid:475409ed-56fa-47af-b267-6bfd38371506,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca269f848b993409e21a44ef0ffb69942a503054d3ee91c3a6f985e85e2dbb5\""
Feb 13 15:36:56.701703 kubelet[2504]: E0213 15:36:56.701675 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:56.703730 containerd[1470]: time="2025-02-13T15:36:56.703666441Z" level=info msg="CreateContainer within sandbox \"eca269f848b993409e21a44ef0ffb69942a503054d3ee91c3a6f985e85e2dbb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:36:56.717841 containerd[1470]: time="2025-02-13T15:36:56.717805919Z" level=info msg="CreateContainer within sandbox \"eca269f848b993409e21a44ef0ffb69942a503054d3ee91c3a6f985e85e2dbb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b771b39cbcf76a782b52eb5a2439a9a5a4ac7d5f7733bcc3f3a9d72bc261dc72\""
Feb 13 15:36:56.719210 containerd[1470]: time="2025-02-13T15:36:56.718468932Z" level=info msg="StartContainer for \"b771b39cbcf76a782b52eb5a2439a9a5a4ac7d5f7733bcc3f3a9d72bc261dc72\""
Feb 13 15:36:56.747496 systemd[1]: Started cri-containerd-b771b39cbcf76a782b52eb5a2439a9a5a4ac7d5f7733bcc3f3a9d72bc261dc72.scope - libcontainer container b771b39cbcf76a782b52eb5a2439a9a5a4ac7d5f7733bcc3f3a9d72bc261dc72.
Feb 13 15:36:56.781725 containerd[1470]: time="2025-02-13T15:36:56.781685976Z" level=info msg="StartContainer for \"b771b39cbcf76a782b52eb5a2439a9a5a4ac7d5f7733bcc3f3a9d72bc261dc72\" returns successfully"
Feb 13 15:36:57.666495 kubelet[2504]: E0213 15:36:57.666438 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:57.687353 kubelet[2504]: I0213 15:36:57.685979 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jthk8" podStartSLOduration=19.68596174 podStartE2EDuration="19.68596174s" podCreationTimestamp="2025-02-13 15:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:57.67592043 +0000 UTC m=+26.174403391" watchObservedRunningTime="2025-02-13 15:36:57.68596174 +0000 UTC m=+26.184444701"
Feb 13 15:36:57.965433 systemd-networkd[1386]: veth659b4014: Gained IPv6LL
Feb 13 15:36:58.093456 systemd-networkd[1386]: cni0: Gained IPv6LL
Feb 13 15:36:58.585561 kubelet[2504]: E0213 15:36:58.585436 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:58.585844 containerd[1470]: time="2025-02-13T15:36:58.585810198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t8kp,Uid:a94e7379-499e-4451-92a3-455330d73b9a,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:58.612621 systemd-networkd[1386]: vethc688f3e0: Link UP
Feb 13 15:36:58.614720 kernel: cni0: port 2(vethc688f3e0) entered blocking state
Feb 13 15:36:58.614804 kernel: cni0: port 2(vethc688f3e0) entered disabled state
Feb 13 15:36:58.614824 kernel: vethc688f3e0: entered allmulticast mode
Feb 13 15:36:58.615842 kernel: vethc688f3e0: entered promiscuous mode
Feb 13 15:36:58.622200 systemd-networkd[1386]: vethc688f3e0: Gained carrier
Feb 13 15:36:58.622411 kernel: cni0: port 2(vethc688f3e0) entered blocking state
Feb 13 15:36:58.622439 kernel: cni0: port 2(vethc688f3e0) entered forwarding state
Feb 13 15:36:58.623989 containerd[1470]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Feb 13 15:36:58.623989 containerd[1470]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:36:58.646340 containerd[1470]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:36:58.646160170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:58.646340 containerd[1470]: time="2025-02-13T15:36:58.646225652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:58.646340 containerd[1470]: time="2025-02-13T15:36:58.646240292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:58.646505 containerd[1470]: time="2025-02-13T15:36:58.646358814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:58.668078 kubelet[2504]: E0213 15:36:58.668042 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:58.668674 systemd[1]: Started cri-containerd-7c8b05b096c04063798a44c9a48db3a7796fde47c29955db397552a60d36f170.scope - libcontainer container 7c8b05b096c04063798a44c9a48db3a7796fde47c29955db397552a60d36f170.
Feb 13 15:36:58.679783 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:36:58.701217 containerd[1470]: time="2025-02-13T15:36:58.701163166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t8kp,Uid:a94e7379-499e-4451-92a3-455330d73b9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c8b05b096c04063798a44c9a48db3a7796fde47c29955db397552a60d36f170\""
Feb 13 15:36:58.702910 kubelet[2504]: E0213 15:36:58.702273 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:58.704129 containerd[1470]: time="2025-02-13T15:36:58.704026858Z" level=info msg="CreateContainer within sandbox \"7c8b05b096c04063798a44c9a48db3a7796fde47c29955db397552a60d36f170\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:36:58.717074 containerd[1470]: time="2025-02-13T15:36:58.717021814Z" level=info msg="CreateContainer within sandbox \"7c8b05b096c04063798a44c9a48db3a7796fde47c29955db397552a60d36f170\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ccc2b7774c6af7796cd9cedbf9e4d219c52a8941359bd81128b996262152edc\""
Feb 13 15:36:58.717938 containerd[1470]: time="2025-02-13T15:36:58.717880109Z" level=info msg="StartContainer for \"8ccc2b7774c6af7796cd9cedbf9e4d219c52a8941359bd81128b996262152edc\""
Feb 13 15:36:58.744492 systemd[1]: Started cri-containerd-8ccc2b7774c6af7796cd9cedbf9e4d219c52a8941359bd81128b996262152edc.scope - libcontainer container 8ccc2b7774c6af7796cd9cedbf9e4d219c52a8941359bd81128b996262152edc.
Feb 13 15:36:58.767891 containerd[1470]: time="2025-02-13T15:36:58.767849094Z" level=info msg="StartContainer for \"8ccc2b7774c6af7796cd9cedbf9e4d219c52a8941359bd81128b996262152edc\" returns successfully"
Feb 13 15:36:59.671007 kubelet[2504]: E0213 15:36:59.670554 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:59.693559 kubelet[2504]: I0213 15:36:59.691786 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7t8kp" podStartSLOduration=21.691770576 podStartE2EDuration="21.691770576s" podCreationTimestamp="2025-02-13 15:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:59.680801865 +0000 UTC m=+28.179284826" watchObservedRunningTime="2025-02-13 15:36:59.691770576 +0000 UTC m=+28.190253537"
Feb 13 15:36:59.949489 systemd-networkd[1386]: vethc688f3e0: Gained IPv6LL
Feb 13 15:37:00.672485 kubelet[2504]: E0213 15:37:00.672422 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:01.439180 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:43986.service - OpenSSH per-connection server daemon (10.0.0.1:43986).
Feb 13 15:37:01.482211 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 43986 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:01.483554 sshd-session[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:01.488504 systemd-logind[1452]: New session 7 of user core.
Feb 13 15:37:01.503526 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:37:01.626547 sshd[3482]: Connection closed by 10.0.0.1 port 43986
Feb 13 15:37:01.626876 sshd-session[3480]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:01.632113 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:43986.service: Deactivated successfully.
Feb 13 15:37:01.634707 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:37:01.636403 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:37:01.637839 systemd-logind[1452]: Removed session 7.
Feb 13 15:37:06.639533 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:55926.service - OpenSSH per-connection server daemon (10.0.0.1:55926).
Feb 13 15:37:06.682374 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 55926 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:06.683027 sshd-session[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:06.686932 systemd-logind[1452]: New session 8 of user core.
Feb 13 15:37:06.698497 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:37:06.814175 sshd[3520]: Connection closed by 10.0.0.1 port 55926
Feb 13 15:37:06.813440 sshd-session[3518]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:06.822720 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:55926.service: Deactivated successfully.
Feb 13 15:37:06.824098 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:37:06.825276 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:37:06.832558 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:55932.service - OpenSSH per-connection server daemon (10.0.0.1:55932).
Feb 13 15:37:06.833746 systemd-logind[1452]: Removed session 8.
Feb 13 15:37:06.872028 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 55932 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:06.873244 sshd-session[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:06.876785 systemd-logind[1452]: New session 9 of user core.
Feb 13 15:37:06.889492 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:37:07.034158 sshd[3536]: Connection closed by 10.0.0.1 port 55932
Feb 13 15:37:07.035151 sshd-session[3534]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:07.043904 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:55932.service: Deactivated successfully.
Feb 13 15:37:07.048339 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:37:07.050262 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:37:07.055835 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:55938.service - OpenSSH per-connection server daemon (10.0.0.1:55938).
Feb 13 15:37:07.057419 systemd-logind[1452]: Removed session 9.
Feb 13 15:37:07.097951 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 55938 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:07.099139 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:07.103332 systemd-logind[1452]: New session 10 of user core.
Feb 13 15:37:07.119494 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:37:07.228330 sshd[3549]: Connection closed by 10.0.0.1 port 55938
Feb 13 15:37:07.228661 sshd-session[3547]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:07.231681 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:55938.service: Deactivated successfully.
Feb 13 15:37:07.233443 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:37:07.234137 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:37:07.235052 systemd-logind[1452]: Removed session 10.
Feb 13 15:37:12.242838 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:55940.service - OpenSSH per-connection server daemon (10.0.0.1:55940).
Feb 13 15:37:12.286847 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 55940 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.288050 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.292519 systemd-logind[1452]: New session 11 of user core.
Feb 13 15:37:12.303479 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:37:12.427746 sshd[3587]: Connection closed by 10.0.0.1 port 55940
Feb 13 15:37:12.428343 sshd-session[3585]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.438925 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:55940.service: Deactivated successfully.
Feb 13 15:37:12.442204 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:37:12.444016 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:37:12.457047 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:55954.service - OpenSSH per-connection server daemon (10.0.0.1:55954).
Feb 13 15:37:12.458445 systemd-logind[1452]: Removed session 11.
Feb 13 15:37:12.500258 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 55954 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.502159 sshd-session[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.506717 systemd-logind[1452]: New session 12 of user core.
Feb 13 15:37:12.517546 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:37:12.780262 sshd[3601]: Connection closed by 10.0.0.1 port 55954
Feb 13 15:37:12.780639 sshd-session[3599]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.790763 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:55954.service: Deactivated successfully.
Feb 13 15:37:12.792217 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:37:12.793785 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:37:12.795588 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:37318.service - OpenSSH per-connection server daemon (10.0.0.1:37318).
Feb 13 15:37:12.796393 systemd-logind[1452]: Removed session 12.
Feb 13 15:37:12.838982 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 37318 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.840228 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.843915 systemd-logind[1452]: New session 13 of user core.
Feb 13 15:37:12.852969 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:37:13.991885 sshd[3614]: Connection closed by 10.0.0.1 port 37318
Feb 13 15:37:13.991156 sshd-session[3612]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:14.001638 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:37318.service: Deactivated successfully.
Feb 13 15:37:14.004522 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:37:14.006731 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:37:14.013970 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:37330.service - OpenSSH per-connection server daemon (10.0.0.1:37330).
Feb 13 15:37:14.015561 systemd-logind[1452]: Removed session 13.
Feb 13 15:37:14.078582 sshd[3635]: Accepted publickey for core from 10.0.0.1 port 37330 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:14.079840 sshd-session[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:14.083524 systemd-logind[1452]: New session 14 of user core.
Feb 13 15:37:14.098486 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:37:14.319396 sshd[3637]: Connection closed by 10.0.0.1 port 37330
Feb 13 15:37:14.319749 sshd-session[3635]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:14.326957 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:37330.service: Deactivated successfully.
Feb 13 15:37:14.328659 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:37:14.330752 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:37:14.341728 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:37338.service - OpenSSH per-connection server daemon (10.0.0.1:37338).
Feb 13 15:37:14.343078 systemd-logind[1452]: Removed session 14.
Feb 13 15:37:14.381110 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 37338 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:14.382440 sshd-session[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:14.386742 systemd-logind[1452]: New session 15 of user core.
Feb 13 15:37:14.396484 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:37:14.504538 sshd[3649]: Connection closed by 10.0.0.1 port 37338
Feb 13 15:37:14.504933 sshd-session[3647]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:14.507418 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:37:14.508783 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:37338.service: Deactivated successfully.
Feb 13 15:37:14.510803 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:37:14.511547 systemd-logind[1452]: Removed session 15.
Feb 13 15:37:19.518991 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:37346.service - OpenSSH per-connection server daemon (10.0.0.1:37346).
Feb 13 15:37:19.565714 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 37346 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:19.567072 sshd-session[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:19.573244 systemd-logind[1452]: New session 16 of user core.
Feb 13 15:37:19.582555 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:37:19.710719 sshd[3688]: Connection closed by 10.0.0.1 port 37346
Feb 13 15:37:19.711549 sshd-session[3686]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:19.714245 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:37346.service: Deactivated successfully.
Feb 13 15:37:19.715887 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:37:19.717531 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:37:19.718620 systemd-logind[1452]: Removed session 16.
Feb 13 15:37:24.723289 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340).
Feb 13 15:37:24.765136 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:24.766489 sshd-session[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:24.772760 systemd-logind[1452]: New session 17 of user core.
Feb 13 15:37:24.783500 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:37:24.902359 sshd[3723]: Connection closed by 10.0.0.1 port 47340
Feb 13 15:37:24.902279 sshd-session[3721]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:24.906284 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:47340.service: Deactivated successfully.
Feb 13 15:37:24.908085 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:37:24.912692 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:37:24.913804 systemd-logind[1452]: Removed session 17.
Feb 13 15:37:29.914657 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:47350.service - OpenSSH per-connection server daemon (10.0.0.1:47350).
Feb 13 15:37:29.957154 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 47350 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:29.958288 sshd-session[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:29.963981 systemd-logind[1452]: New session 18 of user core.
Feb 13 15:37:29.970555 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:37:30.077777 sshd[3760]: Connection closed by 10.0.0.1 port 47350
Feb 13 15:37:30.078102 sshd-session[3758]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:30.081076 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:47350.service: Deactivated successfully.
Feb 13 15:37:30.083918 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:37:30.085509 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:37:30.086330 systemd-logind[1452]: Removed session 18.
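Each SSH session in the log above leaves a matched pair of systemd-logind entries ("New session N of user core." / "Removed session N."). A minimal sketch for pulling those lifecycle events out of journal lines like these (the regex and the `session_events` helper name are illustrative, not part of systemd or any standard tool):

```python
import re

# Matches logind session-lifecycle lines of the form seen above, e.g.
#   Feb 13 15:37:06.876785 systemd-logind[1452]: New session 9 of user core.
#   Feb 13 15:37:07.057419 systemd-logind[1452]: Removed session 9.
LINE_RE = re.compile(
    r"^(?P<ts>\w+ \d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: "
    r"(?P<event>New session|Removed session) (?P<num>\d+)"
)

def session_events(lines):
    """Yield (session_number, event, timestamp) for logind lifecycle lines;
    non-matching lines (sshd, pam_unix, service units) are skipped."""
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            yield int(m.group("num")), m.group("event"), m.group("ts")
```

Pairing each "New session" with the matching "Removed session" by session number then gives open/close times per session; in this log every session from 8 through 18 is opened and torn down cleanly.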