Feb 13 15:17:17.936673 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:17:17.936697 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:17:17.936707 kernel: KASLR enabled
Feb 13 15:17:17.936713 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:17:17.936719 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:17:17.936725 kernel: random: crng init done
Feb 13 15:17:17.936732 kernel: secureboot: Secure boot disabled
Feb 13 15:17:17.936738 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:17:17.936744 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:17:17.936752 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:17:17.936759 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936765 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936771 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936777 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936785 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936793 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936799 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936806 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936813 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:17.936819 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:17:17.936826 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:17:17.936833 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:17.936840 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:17:17.936846 kernel: Zone ranges:
Feb 13 15:17:17.936852 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:17.936860 kernel: DMA32 empty
Feb 13 15:17:17.936866 kernel: Normal empty
Feb 13 15:17:17.936872 kernel: Movable zone start for each node
Feb 13 15:17:17.936879 kernel: Early memory node ranges
Feb 13 15:17:17.936885 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:17:17.936892 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:17:17.936898 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:17:17.936905 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:17:17.936919 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:17:17.936926 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:17:17.936932 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:17:17.936939 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:17.936947 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:17:17.936954 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:17:17.936961 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:17:17.936970 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:17:17.936976 kernel: psci: Trusted OS migration not required
Feb 13 15:17:17.936983 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:17:17.936991 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:17:17.936998 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:17:17.937005 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:17:17.937012 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:17:17.937019 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:17:17.937026 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:17:17.937033 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:17:17.937039 kernel: CPU features: detected: Spectre-v4
Feb 13 15:17:17.937046 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:17:17.937053 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:17:17.937062 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:17:17.937069 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:17:17.937076 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:17:17.937083 kernel: alternatives: applying boot alternatives
Feb 13 15:17:17.937090 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:17:17.937098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:17:17.937105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:17:17.937112 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:17:17.937119 kernel: Fallback order for Node 0: 0
Feb 13 15:17:17.937126 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:17:17.937132 kernel: Policy zone: DMA
Feb 13 15:17:17.937140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:17:17.937147 kernel: software IO TLB: area num 4.
Feb 13 15:17:17.937154 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:17:17.937161 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Feb 13 15:17:17.937168 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:17:17.937175 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:17:17.937183 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:17:17.937190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:17:17.937197 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:17:17.937204 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:17:17.937210 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:17:17.937217 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:17:17.937231 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:17:17.937238 kernel: GICv3: 256 SPIs implemented
Feb 13 15:17:17.937244 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:17:17.937251 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:17:17.937258 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:17:17.937265 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:17:17.937272 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:17:17.937279 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:17:17.937286 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:17:17.937294 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:17:17.937301 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:17:17.937309 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:17:17.937316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:17.937323 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:17:17.937330 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:17:17.937337 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:17:17.937343 kernel: arm-pv: using stolen time PV
Feb 13 15:17:17.937350 kernel: Console: colour dummy device 80x25
Feb 13 15:17:17.937357 kernel: ACPI: Core revision 20230628
Feb 13 15:17:17.937442 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:17:17.937450 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:17:17.937459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:17:17.937465 kernel: landlock: Up and running.
Feb 13 15:17:17.937472 kernel: SELinux: Initializing.
Feb 13 15:17:17.937479 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:17:17.937486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:17:17.937493 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:17:17.937500 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:17:17.937506 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:17:17.937513 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:17:17.937522 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:17:17.937529 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:17:17.937536 kernel: Remapping and enabling EFI services.
Feb 13 15:17:17.937543 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:17:17.937549 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:17:17.937556 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:17:17.937563 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:17:17.937570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:17.937577 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:17:17.937583 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:17:17.937592 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:17:17.937599 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:17:17.937611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:17.937624 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:17:17.937631 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:17:17.937638 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:17:17.937646 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:17:17.937653 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:17.937661 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:17:17.937670 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:17:17.937677 kernel: SMP: Total of 4 processors activated.
Feb 13 15:17:17.937684 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:17:17.937691 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:17:17.937698 kernel: CPU features: detected: Common not Private translations
Feb 13 15:17:17.937705 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:17:17.937712 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:17:17.937720 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:17:17.937728 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:17:17.937735 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:17:17.937742 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:17:17.937749 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:17:17.937757 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:17:17.937769 kernel: alternatives: applying system-wide alternatives
Feb 13 15:17:17.937777 kernel: devtmpfs: initialized
Feb 13 15:17:17.937785 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:17:17.937795 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:17:17.937807 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:17:17.937815 kernel: SMBIOS 3.0.0 present.
Feb 13 15:17:17.937825 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:17:17.937833 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:17:17.937840 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:17:17.937848 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:17:17.937855 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:17:17.937862 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:17:17.937869 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:17:17.937880 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:17:17.937887 kernel: cpuidle: using governor menu
Feb 13 15:17:17.937895 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:17:17.937902 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:17:17.937916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:17:17.937924 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:17:17.937931 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:17:17.937938 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:17:17.937945 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:17:17.937954 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:17:17.937962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:17:17.937969 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:17:17.937976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:17:17.937983 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:17:17.937991 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:17:17.937998 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:17:17.938005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:17:17.938012 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:17:17.938020 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:17:17.938027 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:17:17.938034 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:17:17.938041 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:17:17.938049 kernel: ACPI: Interpreter enabled
Feb 13 15:17:17.938056 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:17:17.938063 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:17:17.938070 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:17:17.938077 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:17:17.938086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:17:17.938222 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:17:17.938297 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:17:17.938372 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:17:17.938454 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:17:17.938515 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:17:17.938525 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:17:17.938536 kernel: PCI host bridge to bus 0000:00
Feb 13 15:17:17.938606 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:17:17.938662 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:17:17.938747 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:17:17.938808 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:17:17.938887 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:17:17.938980 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:17:17.939051 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:17:17.939117 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:17:17.939185 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:17:17.939251 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:17:17.939316 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:17:17.939422 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:17:17.939484 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:17:17.939544 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:17:17.939601 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:17:17.939611 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:17:17.939619 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:17:17.939626 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:17:17.939634 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:17:17.939642 kernel: iommu: Default domain type: Translated
Feb 13 15:17:17.939651 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:17:17.939665 kernel: efivars: Registered efivars operations
Feb 13 15:17:17.939673 kernel: vgaarb: loaded
Feb 13 15:17:17.939680 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:17:17.939688 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:17:17.939695 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:17:17.939703 kernel: pnp: PnP ACPI init
Feb 13 15:17:17.939773 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:17:17.939783 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:17:17.939793 kernel: NET: Registered PF_INET protocol family
Feb 13 15:17:17.939801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:17:17.939808 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:17:17.939816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:17:17.939824 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:17:17.939831 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:17:17.939838 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:17:17.939846 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:17:17.939853 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:17:17.939862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:17:17.939871 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:17:17.939878 kernel: kvm [1]: HYP mode not available
Feb 13 15:17:17.939885 kernel: Initialise system trusted keyrings
Feb 13 15:17:17.939893 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:17:17.939900 kernel: Key type asymmetric registered
Feb 13 15:17:17.939915 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:17:17.939924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:17:17.939931 kernel: io scheduler mq-deadline registered
Feb 13 15:17:17.939941 kernel: io scheduler kyber registered
Feb 13 15:17:17.939948 kernel: io scheduler bfq registered
Feb 13 15:17:17.939956 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:17:17.939963 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:17:17.939971 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:17:17.940045 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:17:17.940055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:17:17.940063 kernel: thunder_xcv, ver 1.0
Feb 13 15:17:17.940070 kernel: thunder_bgx, ver 1.0
Feb 13 15:17:17.940079 kernel: nicpf, ver 1.0
Feb 13 15:17:17.940087 kernel: nicvf, ver 1.0
Feb 13 15:17:17.940161 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:17:17.940224 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:17:17 UTC (1739459837)
Feb 13 15:17:17.940234 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:17:17.940241 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:17:17.940249 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:17:17.940256 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:17:17.940266 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:17:17.940273 kernel: Segment Routing with IPv6
Feb 13 15:17:17.940281 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:17:17.940288 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:17:17.940296 kernel: Key type dns_resolver registered
Feb 13 15:17:17.940303 kernel: registered taskstats version 1
Feb 13 15:17:17.940310 kernel: Loading compiled-in X.509 certificates
Feb 13 15:17:17.940318 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:17:17.940325 kernel: Key type .fscrypt registered
Feb 13 15:17:17.940334 kernel: Key type fscrypt-provisioning registered
Feb 13 15:17:17.940342 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:17:17.940350 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:17:17.940357 kernel: ima: No architecture policies found
Feb 13 15:17:17.940375 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:17:17.940382 kernel: clk: Disabling unused clocks
Feb 13 15:17:17.940390 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:17:17.940397 kernel: Run /init as init process
Feb 13 15:17:17.940405 kernel: with arguments:
Feb 13 15:17:17.940419 kernel: /init
Feb 13 15:17:17.940430 kernel: with environment:
Feb 13 15:17:17.940437 kernel: HOME=/
Feb 13 15:17:17.940444 kernel: TERM=linux
Feb 13 15:17:17.940451 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:17:17.940461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:17:17.940471 systemd[1]: Detected virtualization kvm.
Feb 13 15:17:17.940479 systemd[1]: Detected architecture arm64.
Feb 13 15:17:17.940488 systemd[1]: Running in initrd.
Feb 13 15:17:17.940495 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:17:17.940503 systemd[1]: Hostname set to <localhost>.
Feb 13 15:17:17.940511 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:17:17.940519 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:17:17.940527 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:17.940535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:17.940543 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:17:17.940553 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:17:17.940561 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:17:17.940569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:17:17.940579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:17:17.940587 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:17:17.940595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:17.940603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:17.940613 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:17:17.940621 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:17:17.940629 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:17:17.940637 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:17:17.940645 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:17:17.940653 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:17:17.940661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:17:17.940668 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:17:17.940678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:17.940686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:17.940694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:17.940702 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:17:17.940710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:17:17.940718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:17:17.940727 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:17:17.940734 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:17:17.940742 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:17:17.940752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:17:17.940760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:17.940768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:17:17.940776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:17.940784 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:17:17.940792 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:17:17.940822 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:17:17.940842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:17.940852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:17.940861 systemd-journald[238]: Journal started
Feb 13 15:17:17.940880 systemd-journald[238]: Runtime Journal (/run/log/journal/9f386a7ff2684f829304e383bcf9e1fd) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:17:17.936642 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:17:17.943261 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:17:17.953415 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:17:17.954577 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:17.957712 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:17:17.958610 kernel: Bridge firewalling registered
Feb 13 15:17:17.958541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:17:17.960335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:17:17.962737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:17.968428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:17:17.969629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:17.971791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:17.975955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:17.978072 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:17:17.980489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:17.982999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:17.992544 dracut-cmdline[276]: dracut-dracut-053
Feb 13 15:17:17.995068 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:17:18.013029 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 15:17:18.013101 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:17:18.013133 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:17:18.017992 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 15:17:18.019095 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:18.022404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:18.068397 kernel: SCSI subsystem initialized
Feb 13 15:17:18.073387 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:17:18.080387 kernel: iscsi: registered transport (tcp)
Feb 13 15:17:18.097598 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:17:18.097630 kernel: QLogic iSCSI HBA Driver
Feb 13 15:17:18.142919 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:17:18.156531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:17:18.175111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:17:18.175158 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:17:18.175191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:17:18.222403 kernel: raid6: neonx8 gen() 15764 MB/s
Feb 13 15:17:18.239391 kernel: raid6: neonx4 gen() 15634 MB/s
Feb 13 15:17:18.256390 kernel: raid6: neonx2 gen() 13189 MB/s
Feb 13 15:17:18.273383 kernel: raid6: neonx1 gen() 10480 MB/s
Feb 13 15:17:18.290388 kernel: raid6: int64x8 gen() 6940 MB/s
Feb 13 15:17:18.307386 kernel: raid6: int64x4 gen() 7319 MB/s
Feb 13 15:17:18.324387 kernel: raid6: int64x2 gen() 6118 MB/s
Feb 13 15:17:18.341593 kernel: raid6: int64x1 gen() 5049 MB/s
Feb 13 15:17:18.341613 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Feb 13 15:17:18.359536 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Feb 13 15:17:18.359556 kernel: raid6: using neon recovery algorithm
Feb 13 15:17:18.364382 kernel: xor: measuring software checksum speed
Feb 13 15:17:18.365660 kernel: 8regs : 17488 MB/sec
Feb 13 15:17:18.365674 kernel: 32regs : 19636 MB/sec
Feb 13 15:17:18.366944 kernel: arm64_neon : 26778 MB/sec
Feb 13 15:17:18.366957 kernel: xor: using function: arm64_neon (26778 MB/sec)
Feb 13 15:17:18.419385 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:17:18.429965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:17:18.442529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:18.454629 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 15:17:18.457751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:18.474591 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:17:18.487983 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 15:17:18.515672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:17:18.526565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:17:18.569514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:18.578609 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:17:18.588752 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:17:18.590884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:17:18.594180 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:18.595581 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:17:18.606531 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:17:18.617582 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:17:18.625543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:17:18.625665 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:18.628927 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:18.630046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:17:18.630191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:18.641663 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:17:18.645423 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:17:18.645523 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:17:18.645534 kernel: GPT:9289727 != 19775487
Feb 13 15:17:18.645544 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:17:18.645553 kernel: GPT:9289727 != 19775487
Feb 13 15:17:18.645562 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:17:18.645578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:18.632293 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:18.649632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:18.664757 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
Feb 13 15:17:18.664803 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (508)
Feb 13 15:17:18.665746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:17:18.667979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:18.673759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:17:18.681171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:17:18.685046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:17:18.686191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:17:18.703552 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:17:18.705471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:18.711080 disk-uuid[549]: Primary Header is updated.
Feb 13 15:17:18.711080 disk-uuid[549]: Secondary Entries is updated.
Feb 13 15:17:18.711080 disk-uuid[549]: Secondary Header is updated.
Feb 13 15:17:18.714112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:18.738249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:19.727913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:19.728178 disk-uuid[550]: The operation has completed successfully.
Feb 13 15:17:19.752116 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:17:19.752219 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:17:19.768541 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:17:19.772686 sh[570]: Success
Feb 13 15:17:19.788410 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:17:19.826856 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:17:19.828668 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:17:19.830965 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:17:19.843822 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:17:19.843882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:19.846392 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:17:19.846436 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:17:19.846447 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:17:19.851032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:17:19.852522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:17:19.862527 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:17:19.864261 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:17:19.873200 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:19.873244 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:19.873254 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:19.876394 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:19.884719 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:17:19.886082 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:19.891601 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:17:19.898597 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:17:19.967877 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:17:19.984561 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:17:20.006736 ignition[662]: Ignition 2.20.0
Feb 13 15:17:20.006747 ignition[662]: Stage: fetch-offline
Feb 13 15:17:20.006781 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:20.006790 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:20.006952 ignition[662]: parsed url from cmdline: ""
Feb 13 15:17:20.006956 ignition[662]: no config URL provided
Feb 13 15:17:20.006960 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:17:20.006968 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:17:20.006995 ignition[662]: op(1): [started] loading QEMU firmware config module
Feb 13 15:17:20.006999 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:17:20.016143 systemd-networkd[764]: lo: Link UP
Feb 13 15:17:20.016157 systemd-networkd[764]: lo: Gained carrier
Feb 13 15:17:20.018944 systemd-networkd[764]: Enumeration completed
Feb 13 15:17:20.019081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:17:20.022168 ignition[662]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:17:20.019493 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:20.019497 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:17:20.020281 systemd-networkd[764]: eth0: Link UP
Feb 13 15:17:20.020284 systemd-networkd[764]: eth0: Gained carrier
Feb 13 15:17:20.020291 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:20.020563 systemd[1]: Reached target network.target - Network.
Feb 13 15:17:20.033415 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:17:20.051857 ignition[662]: parsing config with SHA512: 8d7197e2fa946591a654c1e4fa41fd0d563aac2114ed95e97766f909281c376838ffd0999d3c36a3285be76a2832fce9b024b90a977f92e6c04a408a1737b0e2
Feb 13 15:17:20.058094 unknown[662]: fetched base config from "system"
Feb 13 15:17:20.058106 unknown[662]: fetched user config from "qemu"
Feb 13 15:17:20.058546 ignition[662]: fetch-offline: fetch-offline passed
Feb 13 15:17:20.060811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:17:20.058623 ignition[662]: Ignition finished successfully
Feb 13 15:17:20.062222 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:17:20.068583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:17:20.079721 ignition[770]: Ignition 2.20.0
Feb 13 15:17:20.079732 ignition[770]: Stage: kargs
Feb 13 15:17:20.079924 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:20.079935 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:20.084118 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:17:20.080851 ignition[770]: kargs: kargs passed
Feb 13 15:17:20.080908 ignition[770]: Ignition finished successfully
Feb 13 15:17:20.094559 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:17:20.104188 ignition[778]: Ignition 2.20.0
Feb 13 15:17:20.104199 ignition[778]: Stage: disks
Feb 13 15:17:20.104378 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:20.106878 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:17:20.104389 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:20.108518 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:17:20.105243 ignition[778]: disks: disks passed
Feb 13 15:17:20.110166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:17:20.105287 ignition[778]: Ignition finished successfully
Feb 13 15:17:20.112089 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:17:20.113793 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:17:20.115279 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:17:20.127557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:17:20.137875 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:17:20.141794 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:17:20.154472 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:17:20.198173 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:17:20.199801 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:17:20.199628 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:17:20.212469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:17:20.214208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:17:20.215625 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:17:20.220419 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Feb 13 15:17:20.215668 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:17:20.215692 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:17:20.228163 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:20.228195 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:20.228206 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:20.220337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:17:20.222083 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:17:20.231386 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:20.233109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:17:20.267115 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:17:20.271811 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:17:20.275760 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:17:20.279400 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:17:20.371262 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:17:20.388481 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:17:20.391023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:17:20.396389 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:20.413320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:17:20.415216 ignition[910]: INFO : Ignition 2.20.0
Feb 13 15:17:20.415216 ignition[910]: INFO : Stage: mount
Feb 13 15:17:20.415216 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:20.415216 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:20.415216 ignition[910]: INFO : mount: mount passed
Feb 13 15:17:20.415216 ignition[910]: INFO : Ignition finished successfully
Feb 13 15:17:20.416706 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:17:20.429483 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:17:20.842440 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:17:20.851583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:17:20.858288 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (923)
Feb 13 15:17:20.858326 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:20.858338 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:20.859995 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:20.862376 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:20.863263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:17:20.884520 ignition[940]: INFO : Ignition 2.20.0
Feb 13 15:17:20.884520 ignition[940]: INFO : Stage: files
Feb 13 15:17:20.886177 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:20.886177 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:20.886177 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:17:20.889356 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:17:20.889356 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:17:20.892358 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:17:20.893647 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:17:20.893647 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:17:20.893077 unknown[940]: wrote ssh authorized keys file for user: core
Feb 13 15:17:20.897099 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:17:20.897099 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:17:21.121790 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:17:21.367526 systemd-networkd[764]: eth0: Gained IPv6LL
Feb 13 15:17:22.736110 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:22.738444 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:17:23.078729 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:17:23.316898 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:23.316898 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:17:23.324208 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:17:23.326302 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:17:23.351953 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:17:23.356984 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:17:23.359676 ignition[940]: INFO : files: files passed
Feb 13 15:17:23.359676 ignition[940]: INFO : Ignition finished successfully
Feb 13 15:17:23.361385 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:17:23.373564 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:17:23.376184 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:17:23.377870 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:17:23.377957 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:17:23.385539 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:17:23.389008 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:23.389008 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:23.392238 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:23.391697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:17:23.393866 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:17:23.405537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:17:23.433571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:17:23.433704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:17:23.436024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:17:23.437684 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:17:23.439491 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:17:23.440332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:17:23.457407 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:17:23.460075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:17:23.472334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:23.473577 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:23.475646 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:17:23.477472 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:17:23.477607 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:17:23.480128 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:17:23.482162 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:17:23.483777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:17:23.485466 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:17:23.487494 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:17:23.489494 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:17:23.491226 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:17:23.493312 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:17:23.495274 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:17:23.497054 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:17:23.498609 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:17:23.498739 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:17:23.501213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:23.503306 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:23.505223 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:17:23.508430 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:23.509696 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:17:23.509821 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:17:23.512659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:17:23.512772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:17:23.514825 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:17:23.516332 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:17:23.518456 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:23.520183 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:17:23.522303 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:17:23.523834 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:17:23.523929 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:17:23.525411 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:17:23.525494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:17:23.527036 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:17:23.527141 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:17:23.528841 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:17:23.528949 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:17:23.540532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:17:23.542171 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:17:23.543163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:17:23.543277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:23.545391 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:17:23.545494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:17:23.551023 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:17:23.551114 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:17:23.554269 ignition[997]: INFO : Ignition 2.20.0
Feb 13 15:17:23.554269 ignition[997]: INFO : Stage: umount
Feb 13 15:17:23.554269 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:23.554269 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:23.559983 ignition[997]: INFO : umount: umount passed
Feb 13 15:17:23.559983 ignition[997]: INFO : Ignition finished successfully
Feb 13 15:17:23.557090 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:17:23.557250 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:17:23.559169 systemd[1]: Stopped target network.target - Network.
Feb 13 15:17:23.561013 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:17:23.561072 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:17:23.562653 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:17:23.562707 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:17:23.564433 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:17:23.564485 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:17:23.566486 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:17:23.566536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:17:23.568703 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:17:23.570320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:23.574883 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:17:23.578421 systemd-networkd[764]: eth0: DHCPv6 lease lost
Feb 13 15:17:23.578438 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:17:23.578671 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:23.581538 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:17:23.581779 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:17:23.584478 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:17:23.584576 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:23.593753 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:17:23.594608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:17:23.594669 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:17:23.596616 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:17:23.596668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:23.598226 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:17:23.598284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:23.600449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:17:23.600505 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:23.603281 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:23.618678 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:17:23.618836 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:23.620503 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:17:23.620589 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:17:23.629091 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:17:23.629154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:23.630929 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:17:23.630965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:23.634525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:17:23.634588 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:17:23.635754 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:17:23.635799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:17:23.639420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:17:23.639475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:23.655521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:17:23.656570 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:17:23.656629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:23.658787 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:17:23.658836 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:23.660704 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:17:23.660749 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:23.663078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:17:23.663126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:23.665349 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:17:23.665455 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:17:23.667219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:17:23.667355 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:17:23.669514 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:17:23.670951 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:17:23.671025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:17:23.678482 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:17:23.685562 systemd[1]: Switching root.
Feb 13 15:17:23.712461 systemd-journald[238]: Journal stopped
Feb 13 15:17:24.493731 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:17:24.493787 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:17:24.493799 kernel: SELinux: policy capability open_perms=1
Feb 13 15:17:24.493809 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:17:24.493819 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:17:24.493828 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:17:24.493838 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:17:24.493857 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:17:24.493868 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:17:24.493882 kernel: audit: type=1403 audit(1739459843.843:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:17:24.493895 systemd[1]: Successfully loaded SELinux policy in 32.362ms.
Feb 13 15:17:24.493911 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.206ms.
Feb 13 15:17:24.493923 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:17:24.493934 systemd[1]: Detected virtualization kvm.
Feb 13 15:17:24.493944 systemd[1]: Detected architecture arm64.
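The long `systemd 255 running in system mode (+PAM +AUDIT …)` banner above encodes systemd's compile-time options: a leading `+` means the feature was built in, `-` means it was compiled out. A tiny sketch for turning that token list (copied from the log line itself) into something greppable:

```python
# Parse the systemd version banner's compile-time feature tokens:
# '+' = built in, '-' = compiled out.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
          "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
          "-XKBCOMMON +UTMP -SYSVINIT")
enabled = sorted(t[1:] for t in banner.split() if t.startswith("+"))
disabled = sorted(t[1:] for t in banner.split() if t.startswith("-"))
print("built without:", ", ".join(disabled))
# -> built without: ACL, APPARMOR, BPF_FRAMEWORK, FIDO2, GNUTLS, IDN, ...
```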
Feb 13 15:17:24.493955 systemd[1]: Detected first boot.
Feb 13 15:17:24.493965 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:17:24.493976 zram_generator::config[1040]: No configuration found.
Feb 13 15:17:24.493990 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:17:24.494001 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:17:24.494012 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:17:24.494022 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:17:24.494033 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:17:24.494044 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:17:24.494055 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:17:24.494065 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:17:24.494076 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:17:24.494088 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:17:24.494099 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:17:24.494110 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:17:24.494120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:24.494131 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:24.494142 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:17:24.494153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:17:24.494164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:17:24.494176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:17:24.494187 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:17:24.494198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:24.494208 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:17:24.494220 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:17:24.494231 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:17:24.494241 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:17:24.494252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:24.494265 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:17:24.494276 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:17:24.494286 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:17:24.494297 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:17:24.494311 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:17:24.494322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:24.494332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:24.494343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:24.494354 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:17:24.494373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:17:24.494401 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:17:24.494413 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:17:24.494424 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:17:24.494435 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:17:24.494445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:17:24.494456 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:17:24.494467 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:17:24.494479 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:17:24.494493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:24.494504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:17:24.494514 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:17:24.494525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:24.494535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:17:24.494546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:24.494557 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:17:24.494567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:24.494583 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:17:24.494599 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:17:24.494610 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:17:24.494620 kernel: fuse: init (API version 7.39)
Feb 13 15:17:24.494631 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:17:24.494642 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:17:24.494652 kernel: loop: module loaded
Feb 13 15:17:24.494662 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:17:24.494673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:17:24.494683 kernel: ACPI: bus type drm_connector registered
Feb 13 15:17:24.494695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:17:24.494705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:17:24.494737 systemd-journald[1107]: Collecting audit messages is disabled.
Feb 13 15:17:24.494758 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:17:24.494770 systemd-journald[1107]: Journal started
Feb 13 15:17:24.494794 systemd-journald[1107]: Runtime Journal (/run/log/journal/9f386a7ff2684f829304e383bcf9e1fd) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:17:24.269688 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:17:24.290231 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:17:24.290635 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:17:24.498391 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:17:24.498448 systemd[1]: Stopped verity-setup.service.
Feb 13 15:17:24.504749 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:17:24.504503 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:17:24.506012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:17:24.508049 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:17:24.509249 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:17:24.510529 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:17:24.511981 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:17:24.514510 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:17:24.516019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:24.517558 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:17:24.517716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:17:24.519101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:24.519242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:24.520706 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:17:24.520870 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:17:24.522252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:24.522437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:24.523936 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:17:24.524075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:17:24.525437 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:24.525578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:24.526975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:24.528384 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:17:24.529815 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:17:24.543996 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:17:24.556487 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:17:24.558873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:17:24.560118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:17:24.560180 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:17:24.562414 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:17:24.564701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:17:24.567097 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:17:24.568256 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:24.569902 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:17:24.571993 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:17:24.573332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:17:24.577559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:17:24.579881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:17:24.581626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:17:24.586444 systemd-journald[1107]: Time spent on flushing to /var/log/journal/9f386a7ff2684f829304e383bcf9e1fd is 16.527ms for 855 entries.
Feb 13 15:17:24.586444 systemd-journald[1107]: System Journal (/var/log/journal/9f386a7ff2684f829304e383bcf9e1fd) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:17:24.611044 systemd-journald[1107]: Received client request to flush runtime journal.
Feb 13 15:17:24.611107 kernel: loop0: detected capacity change from 0 to 194512
Feb 13 15:17:24.587063 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:17:24.591154 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:17:24.597805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:24.599123 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:17:24.601668 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:17:24.603783 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:17:24.605352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:17:24.609911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:17:24.614577 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:17:24.617710 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:17:24.620717 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:17:24.631397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:17:24.645411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:24.647885 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:17:24.649087 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Feb 13 15:17:24.649103 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Feb 13 15:17:24.655916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:17:24.659189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:24.661042 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:17:24.664673 kernel: loop1: detected capacity change from 0 to 113536
Feb 13 15:17:24.673591 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:17:24.689462 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 15:17:24.697604 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:17:24.712650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:17:24.724685 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:17:24.724706 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 15:17:24.729152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:24.736416 kernel: loop3: detected capacity change from 0 to 194512
Feb 13 15:17:24.743411 kernel: loop4: detected capacity change from 0 to 113536
Feb 13 15:17:24.751504 kernel: loop5: detected capacity change from 0 to 116808
Feb 13 15:17:24.755246 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:17:24.756810 (sd-merge)[1178]: Merged extensions into '/usr'.
Feb 13 15:17:24.760113 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:17:24.760131 systemd[1]: Reloading...
Feb 13 15:17:24.839449 zram_generator::config[1203]: No configuration found.
Feb 13 15:17:24.884379 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:17:24.930678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:24.967702 systemd[1]: Reloading finished in 206 ms.
Feb 13 15:17:24.999408 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:17:25.000835 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:17:25.019625 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:17:25.021738 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:17:25.031338 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:17:25.031349 systemd[1]: Reloading...
Feb 13 15:17:25.046226 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:17:25.048253 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:17:25.049041 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:17:25.049437 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Feb 13 15:17:25.049562 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Feb 13 15:17:25.056853 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:17:25.057000 systemd-tmpfiles[1240]: Skipping /boot
Feb 13 15:17:25.066617 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:17:25.066747 systemd-tmpfiles[1240]: Skipping /boot
Feb 13 15:17:25.089430 zram_generator::config[1265]: No configuration found.
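The `(sd-merge)` lines above are systemd-sysext overlaying the three extension images onto /usr (the loopN capacity changes just before them are those images being attached). Before merging, sysext checks each image's extension-release metadata against the host's os-release; a rough sketch of that compatibility rule, with host and extension values invented for illustration:

```python
# Rough sketch of the compatibility gate systemd-sysext applies before the
# merge logged above: each image ships an extension-release file under
# /usr/lib/extension-release.d/ whose ID must equal the host os-release ID
# (or be the wildcard "_any"); version-level fields are matched similarly.
# The host/extension values below are invented for illustration.
def parse_release(text: str) -> dict:
    return dict(line.split("=", 1) for line in text.splitlines()
                if "=" in line and not line.startswith("#"))

host_os_release = parse_release("ID=flatcar\nVERSION_ID=3975.2.0\n")
extension_release = parse_release("ID=flatcar\nSYSEXT_LEVEL=1.0\n")

ext_id = extension_release.get("ID")
compatible = ext_id == "_any" or ext_id == host_os_release.get("ID")
print("merge allowed:", compatible)  # -> merge allowed: True
```

Once merged, the new /usr contents (here, containerd, Docker, and kubelet binaries) become visible system-wide, which is why systemd immediately reloads its unit database afterwards.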
Feb 13 15:17:25.175177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:25.211424 systemd[1]: Reloading finished in 179 ms.
Feb 13 15:17:25.226915 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:17:25.243908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:25.252805 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:17:25.255511 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:17:25.257873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:17:25.262711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:25.274880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:25.279721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:17:25.283727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:25.285665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:25.289146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:25.293685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:25.294865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:25.299058 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:17:25.300826 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:17:25.305001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:25.305139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:25.308953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:17:25.315949 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:17:25.319257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:25.319717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:25.321320 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:25.321480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:25.326679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:25.330397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:25.334046 systemd-udevd[1307]: Using default interface naming scheme 'v255'.
Feb 13 15:17:25.335787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:25.342818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:25.343897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:25.345142 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:17:25.346961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:25.347108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:25.353697 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:17:25.355497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:25.355633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:25.357419 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:25.357558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:25.364499 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:17:25.373813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:25.420740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:25.425997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:17:25.432322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:25.441923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:25.443317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:25.443568 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:17:25.444795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:25.449190 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:17:25.451445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:25.451732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:25.453599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:25.453741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:25.461673 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:17:25.466293 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:17:25.467462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:17:25.471790 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:25.471979 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:25.473250 augenrules[1375]: No rules
Feb 13 15:17:25.475898 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:17:25.477449 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:17:25.497472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353)
Feb 13 15:17:25.503241 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:17:25.520569 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:17:25.523009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:17:25.523098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:17:25.528560 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:17:25.537717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:17:25.540302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:17:25.588803 systemd-resolved[1305]: Positive Trust Anchors:
Feb 13 15:17:25.589275 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:17:25.589307 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:17:25.593548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:17:25.594989 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:17:25.598418 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:17:25.608742 systemd-resolved[1305]: Defaulting to hostname 'linux'.
Feb 13 15:17:25.610744 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:25.613320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:25.644629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:25.647908 systemd-networkd[1390]: lo: Link UP
Feb 13 15:17:25.647917 systemd-networkd[1390]: lo: Gained carrier
Feb 13 15:17:25.648935 systemd-networkd[1390]: Enumeration completed
Feb 13 15:17:25.649035 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:17:25.649638 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:25.649646 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:17:25.650451 systemd[1]: Reached target network.target - Network.
Feb 13 15:17:25.650466 systemd-networkd[1390]: eth0: Link UP
Feb 13 15:17:25.650470 systemd-networkd[1390]: eth0: Gained carrier
Feb 13 15:17:25.650483 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:25.655101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:17:25.656611 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:17:25.663740 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
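The positive trust anchor resolved logs above is the DNS root zone's key (the 2017 root KSK) expressed as a DS record, and its fields can be read off directly. A small sketch decoding them, with the field meanings taken from the IANA DNSSEC registries:

```python
# Decode the DS-format trust anchor from the resolved log line above.
# Per the IANA DNSSEC registries: algorithm 8 = RSA/SHA-256 and
# digest type 2 = SHA-256; 20326 is the key tag of the root KSK-2017.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rrtype, key_tag, alg, digest_type, digest = ds.split(maxsplit=6)
print(f"owner={owner} key_tag={key_tag} algorithm={alg} (RSA/SHA-256) "
      f"digest_type={digest_type} (SHA-256) digest={digest[:16]}...")
```

The negative trust anchors that follow it are the standard set of private-use and special-use domains (RFC 1918 reverse zones, home.arpa, .local, and so on) for which resolved deliberately skips DNSSEC validation.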
Feb 13 15:17:25.674460 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:17:25.675761 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Feb 13 15:17:25.676724 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:17:25.676769 systemd-timesyncd[1391]: Initial clock synchronization to Thu 2025-02-13 15:17:25.736484 UTC.
Feb 13 15:17:25.705738 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:17:25.712563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:25.751066 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:17:25.752715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:25.754059 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:17:25.755316 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:17:25.756625 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:17:25.758595 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:17:25.759818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:17:25.761112 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:17:25.762620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:17:25.762661 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:17:25.763633 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:17:25.765198 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:17:25.767827 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:17:25.774378 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:17:25.776765 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:17:25.778407 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:17:25.779614 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:17:25.780577 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:17:25.781607 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:17:25.781641 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:17:25.782638 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:17:25.787400 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:17:25.784904 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:17:25.788751 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:17:25.793694 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:17:25.794720 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:17:25.795810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:17:25.801343 jq[1414]: false
Feb 13 15:17:25.799517 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:17:25.802116 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:17:25.804764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:17:25.811650 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:17:25.817708 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:17:25.818231 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:17:25.821007 extend-filesystems[1415]: Found loop3
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found loop4
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found loop5
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda1
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda2
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda3
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found usr
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda4
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda6
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda7
Feb 13 15:17:25.829456 extend-filesystems[1415]: Found vda9
Feb 13 15:17:25.829456 extend-filesystems[1415]: Checking size of /dev/vda9
Feb 13 15:17:25.855989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353)
Feb 13 15:17:25.825564 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:17:25.833610 dbus-daemon[1413]: [system] SELinux support is enabled
Feb 13 15:17:25.856337 extend-filesystems[1415]: Resized partition /dev/vda9
Feb 13 15:17:25.830520 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:17:25.836201 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:17:25.838654 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:17:25.857741 jq[1433]: true
Feb 13 15:17:25.870120 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:17:25.849585 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:17:25.870235 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:17:25.849789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:17:25.850065 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:17:25.850213 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:17:25.868968 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:17:25.869150 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
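The `EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks` line above describes an online grow of the root filesystem; the resize2fs output that follows confirms 4 KiB ("4k") blocks, so the change works out to:

```python
# Quick arithmetic for the on-line resize in the kernel line above;
# ext4 here uses 4 KiB blocks per the resize2fs "(4k) blocks" message.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699
print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> "
      f"{new_blocks * BLOCK / 2**30:.2f} GiB")
# -> 2.11 GiB -> 7.11 GiB
```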
Feb 13 15:17:25.877620 update_engine[1429]: I20250213 15:17:25.876772 1429 main.cc:92] Flatcar Update Engine starting Feb 13 15:17:25.880467 update_engine[1429]: I20250213 15:17:25.880306 1429 update_check_scheduler.cc:74] Next update check in 11m23s Feb 13 15:17:25.896133 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:17:25.897493 jq[1440]: true Feb 13 15:17:25.902870 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:17:25.914516 tar[1439]: linux-arm64/helm Feb 13 15:17:25.918952 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:17:25.918391 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:17:25.918424 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:17:25.920578 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:17:25.920601 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:17:25.930667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:17:25.935596 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:17:25.935596 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:17:25.935596 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:17:25.949397 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Feb 13 15:17:25.935856 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:17:25.936572 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:17:25.936784 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:17:25.937606 systemd-logind[1423]: New seat seat0. Feb 13 15:17:25.941228 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:17:25.996221 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:17:25.997942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:17:26.000024 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:17:26.007035 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:17:26.109399 containerd[1444]: time="2025-02-13T15:17:26.108466031Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:17:26.135669 containerd[1444]: time="2025-02-13T15:17:26.135620355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.137244 containerd[1444]: time="2025-02-13T15:17:26.137207912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:26.137412 containerd[1444]: time="2025-02-13T15:17:26.137391556Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:17:26.137499 containerd[1444]: time="2025-02-13T15:17:26.137482455Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137688904Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137713034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137768802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137780887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137942570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137957706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137971518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.137980832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.138056876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.138235100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138408 containerd[1444]: time="2025-02-13T15:17:26.138321181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:26.138656 containerd[1444]: time="2025-02-13T15:17:26.138333708Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:17:26.138656 containerd[1444]: time="2025-02-13T15:17:26.138444239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 15:17:26.138656 containerd[1444]: time="2025-02-13T15:17:26.138606163Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142100506Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142149087Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142163822Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142179159Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142195019Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:17:26.142395 containerd[1444]: time="2025-02-13T15:17:26.142329319Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:17:26.142603 containerd[1444]: time="2025-02-13T15:17:26.142580335Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:17:26.142714 containerd[1444]: time="2025-02-13T15:17:26.142694360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:17:26.142748 containerd[1444]: time="2025-02-13T15:17:26.142716000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:17:26.142748 containerd[1444]: time="2025-02-13T15:17:26.142730414Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:17:26.142748 containerd[1444]: time="2025-02-13T15:17:26.142744667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142799 containerd[1444]: time="2025-02-13T15:17:26.142757836Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142799 containerd[1444]: time="2025-02-13T15:17:26.142771045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142799 containerd[1444]: time="2025-02-13T15:17:26.142783733Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142799 containerd[1444]: time="2025-02-13T15:17:26.142798026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142874 containerd[1444]: time="2025-02-13T15:17:26.142822598Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142874 containerd[1444]: time="2025-02-13T15:17:26.142835486Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:17:26.142874 containerd[1444]: time="2025-02-13T15:17:26.142847009Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 15:17:26.142874 containerd[1444]: time="2025-02-13T15:17:26.142865999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.142939 containerd[1444]: time="2025-02-13T15:17:26.142879329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.142939 containerd[1444]: time="2025-02-13T15:17:26.142893060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.142939 containerd[1444]: time="2025-02-13T15:17:26.142905225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.142939 containerd[1444]: time="2025-02-13T15:17:26.142918515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.142939 containerd[1444]: time="2025-02-13T15:17:26.142931202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.142943327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.142955814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.142968461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.142982835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.142994277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.143006242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143027 containerd[1444]: time="2025-02-13T15:17:26.143017885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143143 containerd[1444]: time="2025-02-13T15:17:26.143031054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:17:26.143143 containerd[1444]: time="2025-02-13T15:17:26.143052294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143143 containerd[1444]: time="2025-02-13T15:17:26.143065101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143143 containerd[1444]: time="2025-02-13T15:17:26.143076945Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143345868Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143400672Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143412717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143424441Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143433434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143445880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143455637Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:17:26.143819 containerd[1444]: time="2025-02-13T15:17:26.143469689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:17:26.144054 containerd[1444]: time="2025-02-13T15:17:26.143872711Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:17:26.144054 containerd[1444]: time="2025-02-13T15:17:26.143927234Z" level=info msg="Connect containerd service" Feb 13 15:17:26.144054 containerd[1444]: time="2025-02-13T15:17:26.143961522Z" level=info msg="using legacy CRI server" Feb 13 15:17:26.144054 containerd[1444]: time="2025-02-13T15:17:26.143968348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:17:26.144205 containerd[1444]: time="2025-02-13T15:17:26.144191861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:17:26.146997 containerd[1444]: time="2025-02-13T15:17:26.146962788Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147291373Z" level=info msg="Start subscribing containerd event" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147670064Z" level=info msg="Start recovering state" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147796174Z" level=info msg="Start event monitor" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147819019Z" level=info msg="Start snapshots syncer" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147836043Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:17:26.147949 containerd[1444]: time="2025-02-13T15:17:26.147844394Z" level=info msg="Start streaming server" Feb 13 15:17:26.150619 containerd[1444]: time="2025-02-13T15:17:26.148801159Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:17:26.150619 containerd[1444]: time="2025-02-13T15:17:26.148943811Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:17:26.149104 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:17:26.150880 containerd[1444]: time="2025-02-13T15:17:26.150797199Z" level=info msg="containerd successfully booted in 0.043319s" Feb 13 15:17:26.270980 tar[1439]: linux-arm64/LICENSE Feb 13 15:17:26.271084 tar[1439]: linux-arm64/README.md Feb 13 15:17:26.284059 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:17:26.598490 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:17:26.616592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:17:26.631642 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:17:26.638014 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:17:26.638227 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:17:26.640902 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:17:26.652677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:17:26.655567 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:17:26.657655 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:17:26.659115 systemd[1]: Reached target getty.target - Login Prompts. 
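[Annotation] The "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is the expected first-boot state: the CRI plugin scans NetworkPluginConfDir (per the config dump) and finds nothing until a network add-on installs a config. For orientation, a minimal bridge conflist of the kind the loader accepts looks like the sketch below; the file name, network name, and subnet are illustrative assumptions, not values from this host.

    /etc/cni/net.d/10-bridge.conflist (hypothetical example):
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        }
      ]
    }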
Feb 13 15:17:27.064934 systemd-networkd[1390]: eth0: Gained IPv6LL Feb 13 15:17:27.067421 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:17:27.069216 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:17:27.078631 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:17:27.081111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:27.083409 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:17:27.099963 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:17:27.101738 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:17:27.101901 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:17:27.104083 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:17:27.572973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:27.574790 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:17:27.577337 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:27.579464 systemd[1]: Startup finished in 556ms (kernel) + 6.138s (initrd) + 3.772s (userspace) = 10.467s. Feb 13 15:17:28.183412 kubelet[1526]: E0213 15:17:28.183258 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:28.186678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:28.186822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:30.225007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:17:30.226458 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:39112.service - OpenSSH per-connection server daemon (10.0.0.1:39112). Feb 13 15:17:30.328181 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:30.332101 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:30.356531 systemd-logind[1423]: New session 1 of user core. Feb 13 15:17:30.357565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:30.370709 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:17:30.383418 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:30.386117 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:30.393710 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:30.470254 systemd[1545]: Queued start job for default target default.target. Feb 13 15:17:30.481347 systemd[1545]: Created slice app.slice - User Application Slice. Feb 13 15:17:30.481400 systemd[1545]: Reached target paths.target - Paths. Feb 13 15:17:30.481413 systemd[1545]: Reached target timers.target - Timers. Feb 13 15:17:30.482783 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... 
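[Annotation] The kubelet exit above ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal failure mode before kubeadm init/join has run, since kubeadm is what writes that file; systemd will keep restarting the unit until it appears. A minimal KubeletConfiguration of the kind kubeadm generates might begin as below; the field values are illustrative assumptions, not recovered from this host.

    /var/lib/kubelet/config.yaml (sketch):
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # consistent with SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    clusterDNS:
      - 10.96.0.10                   # assumed service-CIDR DNS address
    clusterDomain: cluster.local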
Feb 13 15:17:30.497742 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:30.497881 systemd[1545]: Reached target sockets.target - Sockets. Feb 13 15:17:30.497895 systemd[1545]: Reached target basic.target - Basic System. Feb 13 15:17:30.497941 systemd[1545]: Reached target default.target - Main User Target. Feb 13 15:17:30.497969 systemd[1545]: Startup finished in 98ms. Feb 13 15:17:30.498071 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:30.499745 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:17:30.569499 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122). Feb 13 15:17:30.615561 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:30.616865 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:30.622211 systemd-logind[1423]: New session 2 of user core. Feb 13 15:17:30.633568 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:17:30.690964 sshd[1558]: Connection closed by 10.0.0.1 port 39122 Feb 13 15:17:30.691901 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:30.704691 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:39122.service: Deactivated successfully. Feb 13 15:17:30.707193 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:30.710261 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:30.730843 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:39126.service - OpenSSH per-connection server daemon (10.0.0.1:39126). Feb 13 15:17:30.731510 systemd-logind[1423]: Removed session 2. Feb 13 15:17:30.773457 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 39126 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:30.774877 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:30.781025 systemd-logind[1423]: New session 3 of user core. Feb 13 15:17:30.790635 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:17:30.841415 sshd[1565]: Connection closed by 10.0.0.1 port 39126 Feb 13 15:17:30.842174 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:30.863693 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:39126.service: Deactivated successfully. Feb 13 15:17:30.866999 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:30.870951 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:17:30.879731 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:39140.service - OpenSSH per-connection server daemon (10.0.0.1:39140). Feb 13 15:17:30.881283 systemd-logind[1423]: Removed session 3. Feb 13 15:17:30.936918 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 39140 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:30.938935 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:30.944310 systemd-logind[1423]: New session 4 of user core. Feb 13 15:17:30.966722 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:17:31.025691 sshd[1572]: Connection closed by 10.0.0.1 port 39140 Feb 13 15:17:31.026488 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:31.048039 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:39140.service: Deactivated successfully. Feb 13 15:17:31.049899 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:31.052707 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:31.064866 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:39152.service - OpenSSH per-connection server daemon (10.0.0.1:39152). Feb 13 15:17:31.065743 systemd-logind[1423]: Removed session 4. Feb 13 15:17:31.114717 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 39152 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:31.113475 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:31.119510 systemd-logind[1423]: New session 5 of user core. Feb 13 15:17:31.126620 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:31.195166 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:31.195540 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:31.537649 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:31.537779 (dockerd)[1600]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:31.833399 dockerd[1600]: time="2025-02-13T15:17:31.833230237Z" level=info msg="Starting up" Feb 13 15:17:32.070863 dockerd[1600]: time="2025-02-13T15:17:32.070444511Z" level=info msg="Loading containers: start." Feb 13 15:17:32.236433 kernel: Initializing XFRM netlink socket Feb 13 15:17:32.308296 systemd-networkd[1390]: docker0: Link UP Feb 13 15:17:32.350077 dockerd[1600]: time="2025-02-13T15:17:32.350025724Z" level=info msg="Loading containers: done." Feb 13 15:17:32.370765 dockerd[1600]: time="2025-02-13T15:17:32.370702988Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:32.370943 dockerd[1600]: time="2025-02-13T15:17:32.370815272Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:17:32.370980 dockerd[1600]: time="2025-02-13T15:17:32.370945843Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:32.403846 dockerd[1600]: time="2025-02-13T15:17:32.403672794Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:32.404191 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:17:32.979923 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3130659472-merged.mount: Deactivated successfully. Feb 13 15:17:33.100145 containerd[1444]: time="2025-02-13T15:17:33.100088664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:17:33.775648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090729692.mount: Deactivated successfully. 
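[Annotation] The overlay2 warning above ("Not using native diff ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational: with redirect_dir built into the kernel, dockerd falls back to a slower but safe diff path rather than risk inconsistent layers. One quick way to confirm the active storage driver, assuming the docker CLI is present on the host:

    docker info --format '{{.Driver}}'
    # expected output on this host: overlay2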
Feb 13 15:17:34.999012 containerd[1444]: time="2025-02-13T15:17:34.998803972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:34.999961 containerd[1444]: time="2025-02-13T15:17:34.999699927Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863" Feb 13 15:17:35.000828 containerd[1444]: time="2025-02-13T15:17:35.000782641Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:35.004541 containerd[1444]: time="2025-02-13T15:17:35.004488701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:35.005210 containerd[1444]: time="2025-02-13T15:17:35.005184635Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.905050587s" Feb 13 15:17:35.005250 containerd[1444]: time="2025-02-13T15:17:35.005224197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:17:35.023972 containerd[1444]: time="2025-02-13T15:17:35.023926054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:17:36.378054 containerd[1444]: time="2025-02-13T15:17:36.378002754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:36.379024 containerd[1444]: time="2025-02-13T15:17:36.378807169Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093" Feb 13 15:17:36.379778 containerd[1444]: time="2025-02-13T15:17:36.379717912Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:36.382669 containerd[1444]: time="2025-02-13T15:17:36.382639112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:36.383969 containerd[1444]: time="2025-02-13T15:17:36.383834573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.35986559s" Feb 13 15:17:36.383969 containerd[1444]: time="2025-02-13T15:17:36.383871365Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
15:17:36.402582 containerd[1444]: time="2025-02-13T15:17:36.402539560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:17:37.516722 containerd[1444]: time="2025-02-13T15:17:37.516662726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:37.517144 containerd[1444]: time="2025-02-13T15:17:37.517104216Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982" Feb 13 15:17:37.518611 containerd[1444]: time="2025-02-13T15:17:37.518001023Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:37.521493 containerd[1444]: time="2025-02-13T15:17:37.521437772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:37.522552 containerd[1444]: time="2025-02-13T15:17:37.522516393Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.119936114s" Feb 13 15:17:37.522552 containerd[1444]: time="2025-02-13T15:17:37.522551136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:17:37.542178 containerd[1444]: time="2025-02-13T15:17:37.542136895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:17:38.298840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:17:38.308925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:38.408572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:38.417162 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:38.461787 kubelet[1898]: E0213 15:17:38.461649 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:38.466426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:38.466568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:38.608840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620730719.mount: Deactivated successfully. 
Feb 13 15:17:38.838676 containerd[1444]: time="2025-02-13T15:17:38.838609413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:38.839694 containerd[1444]: time="2025-02-13T15:17:38.839653651Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377" Feb 13 15:17:38.840861 containerd[1444]: time="2025-02-13T15:17:38.840696165Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:38.844024 containerd[1444]: time="2025-02-13T15:17:38.843979778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:38.845319 containerd[1444]: time="2025-02-13T15:17:38.845284464Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.303110461s" Feb 13 15:17:38.845411 containerd[1444]: time="2025-02-13T15:17:38.845320245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:17:38.865293 containerd[1444]: time="2025-02-13T15:17:38.865165848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:17:39.516621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290798479.mount: Deactivated successfully. 
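[Annotation] The PullImage/"Pulled image" pairs above are CRI-driven pulls through containerd's socket (/run/containerd/containerd.sock); the log does not say which client issued them, though the install script's kubeadm step is a plausible source. The same pulls can be reproduced by hand with crictl, assuming it is installed and pointed at that socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.29.14
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images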
Feb 13 15:17:40.560325 containerd[1444]: time="2025-02-13T15:17:40.560252782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:40.560775 containerd[1444]: time="2025-02-13T15:17:40.560724856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:17:40.561914 containerd[1444]: time="2025-02-13T15:17:40.561851481Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:40.567397 containerd[1444]: time="2025-02-13T15:17:40.565707556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:40.567397 containerd[1444]: time="2025-02-13T15:17:40.566910456Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.70169773s" Feb 13 15:17:40.567397 containerd[1444]: time="2025-02-13T15:17:40.566936776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:17:40.589197 containerd[1444]: time="2025-02-13T15:17:40.589152875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:17:41.106985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722662135.mount: Deactivated successfully. 
Feb 13 15:17:41.112411 containerd[1444]: time="2025-02-13T15:17:41.112228742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:41.112786 containerd[1444]: time="2025-02-13T15:17:41.112740908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 15:17:41.113757 containerd[1444]: time="2025-02-13T15:17:41.113720098Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:41.115845 containerd[1444]: time="2025-02-13T15:17:41.115814589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:41.116644 containerd[1444]: time="2025-02-13T15:17:41.116619932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 527.419626ms" Feb 13 15:17:41.116699 containerd[1444]: time="2025-02-13T15:17:41.116649774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:17:41.136107 containerd[1444]: time="2025-02-13T15:17:41.135876534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:17:41.927982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883256754.mount: Deactivated successfully. Feb 13 15:17:43.596269 containerd[1444]: time="2025-02-13T15:17:43.596212826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:43.597317 containerd[1444]: time="2025-02-13T15:17:43.597056718Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Feb 13 15:17:43.598361 containerd[1444]: time="2025-02-13T15:17:43.597939980Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:43.601388 containerd[1444]: time="2025-02-13T15:17:43.601199405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:43.602684 containerd[1444]: time="2025-02-13T15:17:43.602494100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.466577751s" Feb 13 15:17:43.602684 containerd[1444]: time="2025-02-13T15:17:43.602527862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:17:48.630963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
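[Annotation] Note the version skew visible here: the CRI config dump earlier pins SandboxImage registry.k8s.io/pause:3.8, while pause:3.9 is pulled alongside the v1.29 control-plane images. Aligning the two is done in containerd's CRI section; a sketch in the containerd 1.7 TOML layout follows (the chosen value is an assumption about intent, not something this log confirms):

    /etc/containerd/config.toml (fragment):
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"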
Feb 13 15:17:48.640633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:48.650239 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:48.650305 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:17:48.650568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:48.653596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:48.672413 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-5.scope)... Feb 13 15:17:48.672431 systemd[1]: Reloading... Feb 13 15:17:48.741501 zram_generator::config[2144]: No configuration found. Feb 13 15:17:49.080315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:49.132917 systemd[1]: Reloading finished in 460 ms. Feb 13 15:17:49.176029 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:49.176092 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:17:49.176293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:49.178762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:49.274505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:49.278612 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:49.320261 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:49.320261 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:49.320261 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
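[Annotation] The "Scheduled restart job, restart counter is at 2" and the daemon reload above come from systemd's Restart= policy on kubelet.service: each config.yaml failure ends the main process, and systemd re-queues a start after the restart delay. Useful commands for watching such a loop, assuming standard systemd tooling:

    systemctl status kubelet.service     # restart counter and last exit status
    journalctl -u kubelet.service -e     # the run.go:74 failures seen above
    systemctl cat kubelet.service        # effective unit file plus drop-ins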
Feb 13 15:17:49.320680 kubelet[2192]: I0213 15:17:49.320307 2192 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:50.629707 kubelet[2192]: I0213 15:17:50.629665 2192 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:17:50.630130 kubelet[2192]: I0213 15:17:50.630113 2192 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:50.630534 kubelet[2192]: I0213 15:17:50.630513 2192 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:17:50.677286 kubelet[2192]: E0213 15:17:50.677248 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.679585 kubelet[2192]: I0213 15:17:50.679285 2192 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:50.686860 kubelet[2192]: I0213 15:17:50.686829 2192 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:17:50.687078 kubelet[2192]: I0213 15:17:50.687065 2192 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:50.687300 kubelet[2192]: I0213 15:17:50.687282 2192 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:50.687408 kubelet[2192]: I0213 15:17:50.687308 2192 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:50.687408 kubelet[2192]: I0213 15:17:50.687317 2192 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:50.687485 kubelet[2192]: I0213 15:17:50.687469 2192 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:50.689953 kubelet[2192]: I0213 15:17:50.689919 2192 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:17:50.689953 kubelet[2192]: 
I0213 15:17:50.689952 2192 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:50.690026 kubelet[2192]: I0213 15:17:50.689977 2192 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:50.690026 kubelet[2192]: I0213 15:17:50.689991 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:50.690784 kubelet[2192]: W0213 15:17:50.690467 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.690784 kubelet[2192]: E0213 15:17:50.690544 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.691738 kubelet[2192]: W0213 15:17:50.691671 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.691738 kubelet[2192]: E0213 15:17:50.691720 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.692396 kubelet[2192]: I0213 15:17:50.692204 2192 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:50.692756 kubelet[2192]: I0213 15:17:50.692740 2192 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:50.695292 kubelet[2192]: W0213 15:17:50.695249 2192 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
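[Annotation] "Adding static pod path" above means this kubelet also watches /etc/kubernetes/manifests directly, which is how the control-plane pods admitted below (kube-apiserver/controller-manager/scheduler on node "localhost") can run before any API server exists. A static pod manifest is an ordinary Pod file dropped into that directory; a minimal sketch, with placeholder name and image:

    /etc/kubernetes/manifests/example.yaml (hypothetical):
    apiVersion: v1
    kind: Pod
    metadata:
      name: example
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: example
          image: registry.k8s.io/pause:3.9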
Feb 13 15:17:50.696212 kubelet[2192]: I0213 15:17:50.696173 2192 server.go:1256] "Started kubelet" Feb 13 15:17:50.696828 kubelet[2192]: I0213 15:17:50.696798 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:50.697191 kubelet[2192]: I0213 15:17:50.697156 2192 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:50.698026 kubelet[2192]: I0213 15:17:50.697991 2192 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:50.698026 kubelet[2192]: I0213 15:17:50.698011 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:50.698294 kubelet[2192]: I0213 15:17:50.698180 2192 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:17:50.699568 kubelet[2192]: I0213 15:17:50.699541 2192 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:50.700168 kubelet[2192]: I0213 15:17:50.699644 2192 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:17:50.700168 kubelet[2192]: I0213 15:17:50.699721 2192 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:17:50.700168 kubelet[2192]: W0213 15:17:50.700077 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.700168 kubelet[2192]: E0213 15:17:50.700118 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.700386 kubelet[2192]: E0213 15:17:50.700348 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Feb 13 15:17:50.705286 kubelet[2192]: I0213 15:17:50.702111 2192 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:50.705286 kubelet[2192]: I0213 15:17:50.702240 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:50.709152 kubelet[2192]: I0213 15:17:50.709126 2192 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:50.710153 kubelet[2192]: E0213 15:17:50.710099 2192 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd894de3f97e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:17:50.69614323 +0000 UTC m=+1.414299287,LastTimestamp:2025-02-13 15:17:50.69614323 +0000 UTC m=+1.414299287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:17:50.710806 kubelet[2192]: E0213 15:17:50.710770 2192 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:50.723268 kubelet[2192]: I0213 15:17:50.723121 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:50.726270 kubelet[2192]: I0213 15:17:50.725574 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:17:50.726270 kubelet[2192]: I0213 15:17:50.725603 2192 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:50.726270 kubelet[2192]: I0213 15:17:50.725621 2192 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:17:50.726270 kubelet[2192]: E0213 15:17:50.725681 2192 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:50.726270 kubelet[2192]: W0213 15:17:50.726182 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.726270 kubelet[2192]: E0213 15:17:50.726238 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:50.727454 kubelet[2192]: I0213 15:17:50.727327 2192 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:50.727454 kubelet[2192]: I0213 15:17:50.727347 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:50.727454 kubelet[2192]: I0213 15:17:50.727392 2192 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:50.801322 kubelet[2192]: I0213 15:17:50.801278 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:50.801744 kubelet[2192]: E0213 15:17:50.801727 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:17:50.826027 kubelet[2192]: E0213 15:17:50.825993 2192 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:17:50.870505 kubelet[2192]: I0213 15:17:50.870473 2192 policy_none.go:49] "None policy: Start" Feb 13 15:17:50.871214 kubelet[2192]: I0213 15:17:50.871180 2192 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:50.871294 kubelet[2192]: I0213 15:17:50.871230 2192 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:50.889775 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
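[Annotation] Every "dial tcp 10.0.0.48:6443: connect: connection refused" in this stretch is the same condition seen from different reflectors: nothing is listening on the API server port yet, and nothing can listen until this kubelet starts the static kube-apiserver pod below, a deliberate bootstrap circularity. Two quick probes for that state, assuming standard tools on the host:

    ss -ltn '( sport = :6443 )'                # no listener while the errors repeat
    curl -sk https://10.0.0.48:6443/healthz    # returns "ok" once the apiserver is up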
Feb 13 15:17:50.900801 kubelet[2192]: E0213 15:17:50.900747 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Feb 13 15:17:50.905486 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:17:50.919917 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:17:50.921291 kubelet[2192]: I0213 15:17:50.921259 2192 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:50.921697 kubelet[2192]: I0213 15:17:50.921578 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:50.922901 kubelet[2192]: E0213 15:17:50.922880 2192 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:17:51.003522 kubelet[2192]: I0213 15:17:51.003486 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:51.003874 kubelet[2192]: E0213 15:17:51.003844 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:17:51.027062 kubelet[2192]: I0213 15:17:51.027021 2192 topology_manager.go:215] "Topology Admit Handler" podUID="927eb4369e17571e4d8ae8e93ce815d5" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:17:51.028302 kubelet[2192]: I0213 15:17:51.028157 2192 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:17:51.029214 kubelet[2192]: I0213 15:17:51.029156 2192 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:17:51.035644 systemd[1]: Created slice kubepods-burstable-pod927eb4369e17571e4d8ae8e93ce815d5.slice - libcontainer container kubepods-burstable-pod927eb4369e17571e4d8ae8e93ce815d5.slice. Feb 13 15:17:51.049981 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. Feb 13 15:17:51.065414 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. 
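[Annotation] The kubepods*.slice units created here are the cgroup tree kubelet manages with the systemd cgroup driver: one slice per QoS class (besteffort, burstable) and one per pod, named after the pod UID. They can be inspected like any other unit; the cgroup path below assumes a unified (cgroup v2) layout:

    systemctl status kubepods.slice
    systemd-cgls kubepods.slice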
Feb 13 15:17:51.101889 kubelet[2192]: I0213 15:17:51.101849 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:51.101889 kubelet[2192]: I0213 15:17:51.101901 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:51.102036 kubelet[2192]: I0213 15:17:51.101924 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:51.102036 kubelet[2192]: I0213 15:17:51.101993 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:51.102121 kubelet[2192]: I0213 15:17:51.102050 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:51.102121 kubelet[2192]: I0213 15:17:51.102077 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:17:51.102177 kubelet[2192]: I0213 15:17:51.102139 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:51.102177 kubelet[2192]: I0213 15:17:51.102160 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:51.102217 kubelet[2192]: I0213 15:17:51.102181 2192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:51.302107 kubelet[2192]: E0213 15:17:51.301966 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Feb 13 15:17:51.349660 kubelet[2192]: E0213 15:17:51.349386 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:51.350222 containerd[1444]: time="2025-02-13T15:17:51.350175179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:927eb4369e17571e4d8ae8e93ce815d5,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:51.362433 kubelet[2192]: E0213 15:17:51.362399 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:51.362947 containerd[1444]: time="2025-02-13T15:17:51.362911823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:51.368201 kubelet[2192]: E0213 15:17:51.368168 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:51.368654 containerd[1444]: time="2025-02-13T15:17:51.368612348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:51.406020 kubelet[2192]: I0213 15:17:51.405951 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:51.406286 kubelet[2192]: E0213 15:17:51.406257 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:17:51.723185 kubelet[2192]: W0213 15:17:51.723044 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:51.723185 kubelet[2192]: E0213 15:17:51.723116 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:51.840186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064468225.mount: Deactivated successfully. 
Feb 13 15:17:51.843853 containerd[1444]: time="2025-02-13T15:17:51.843787495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:51.845996 containerd[1444]: time="2025-02-13T15:17:51.845872567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:17:51.847468 containerd[1444]: time="2025-02-13T15:17:51.847433650Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:51.848182 containerd[1444]: time="2025-02-13T15:17:51.848127206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:51.849038 containerd[1444]: time="2025-02-13T15:17:51.848978480Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:51.851802 containerd[1444]: time="2025-02-13T15:17:51.850892345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:51.851802 containerd[1444]: time="2025-02-13T15:17:51.851024123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:51.853078 containerd[1444]: time="2025-02-13T15:17:51.853030778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:51.854668 containerd[1444]: time="2025-02-13T15:17:51.854635573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.37601ms" Feb 13 15:17:51.856240 containerd[1444]: time="2025-02-13T15:17:51.856193733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.20253ms" Feb 13 15:17:51.858459 containerd[1444]: time="2025-02-13T15:17:51.858414626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.722579ms" Feb 13 15:17:51.910851 kubelet[2192]: W0213 15:17:51.910789 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:51.911040 
kubelet[2192]: E0213 15:17:51.911025 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:52.010439 containerd[1444]: time="2025-02-13T15:17:52.009506434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:52.010439 containerd[1444]: time="2025-02-13T15:17:52.010315679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:52.010439 containerd[1444]: time="2025-02-13T15:17:52.010335293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.011001 containerd[1444]: time="2025-02-13T15:17:52.010730889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:52.011001 containerd[1444]: time="2025-02-13T15:17:52.010816269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:52.011001 containerd[1444]: time="2025-02-13T15:17:52.010832160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.011001 containerd[1444]: time="2025-02-13T15:17:52.010934351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.011001 containerd[1444]: time="2025-02-13T15:17:52.010705431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.014393 containerd[1444]: time="2025-02-13T15:17:52.013265939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:52.014393 containerd[1444]: time="2025-02-13T15:17:52.013324340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:52.014393 containerd[1444]: time="2025-02-13T15:17:52.013339911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.014393 containerd[1444]: time="2025-02-13T15:17:52.013460435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:52.034621 systemd[1]: Started cri-containerd-402ba754b49fa4bcfb7f61fa54cb2cf51d1d930c350a36baa407468a9f35e210.scope - libcontainer container 402ba754b49fa4bcfb7f61fa54cb2cf51d1d930c350a36baa407468a9f35e210. Feb 13 15:17:52.035926 systemd[1]: Started cri-containerd-7e2be139be935d6896ad5dad14c41cc24a2e95a17c991c2862ef857a810d2597.scope - libcontainer container 7e2be139be935d6896ad5dad14c41cc24a2e95a17c991c2862ef857a810d2597. Feb 13 15:17:52.042551 systemd[1]: Started cri-containerd-eca6d3413f273cbc07d3e20efa708bdece49e8ee123c290043af3470df7c0f85.scope - libcontainer container eca6d3413f273cbc07d3e20efa708bdece49e8ee123c290043af3470df7c0f85. 
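The "connection refused" failures from the lease controller and the client-go reflectors are expected at this stage: kubelet is trying to reach its own static kube-apiserver pod at 10.0.0.48:6443 before that pod is running, and it retries with a growing interval (interval="800ms" earlier, interval="1.6s" further down). A hedged sketch of that probe-and-back-off pattern against the same endpoint — this is not client-go's implementation, and the cap on the interval is an assumption:

```go
// Illustrative probe loop with doubling backoff, mirroring the
// interval=800ms -> interval=1.6s progression in the kubelet logs.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.48:6443" // apiserver endpoint from the logs
	interval := 800 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable at", addr)
			return
		}
		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, interval)
		time.Sleep(interval)
		if interval < 7*time.Second { // cap chosen for illustration
			interval *= 2
		}
	}
}
```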
Feb 13 15:17:52.076176 containerd[1444]: time="2025-02-13T15:17:52.076037961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"402ba754b49fa4bcfb7f61fa54cb2cf51d1d930c350a36baa407468a9f35e210\"" Feb 13 15:17:52.077202 kubelet[2192]: E0213 15:17:52.077174 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:52.082306 containerd[1444]: time="2025-02-13T15:17:52.082048477Z" level=info msg="CreateContainer within sandbox \"402ba754b49fa4bcfb7f61fa54cb2cf51d1d930c350a36baa407468a9f35e210\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:17:52.084288 containerd[1444]: time="2025-02-13T15:17:52.084183448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:927eb4369e17571e4d8ae8e93ce815d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca6d3413f273cbc07d3e20efa708bdece49e8ee123c290043af3470df7c0f85\"" Feb 13 15:17:52.085088 kubelet[2192]: E0213 15:17:52.085062 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:52.088461 containerd[1444]: time="2025-02-13T15:17:52.088420606Z" level=info msg="CreateContainer within sandbox \"eca6d3413f273cbc07d3e20efa708bdece49e8ee123c290043af3470df7c0f85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:17:52.089736 containerd[1444]: time="2025-02-13T15:17:52.089698338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2be139be935d6896ad5dad14c41cc24a2e95a17c991c2862ef857a810d2597\"" Feb 13 15:17:52.090515 kubelet[2192]: E0213 15:17:52.090478 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:52.092125 kubelet[2192]: W0213 15:17:52.092068 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:52.092193 kubelet[2192]: E0213 15:17:52.092137 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:52.093305 containerd[1444]: time="2025-02-13T15:17:52.093257062Z" level=info msg="CreateContainer within sandbox \"7e2be139be935d6896ad5dad14c41cc24a2e95a17c991c2862ef857a810d2597\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:17:52.103186 kubelet[2192]: E0213 15:17:52.103126 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Feb 13 15:17:52.104876 containerd[1444]: time="2025-02-13T15:17:52.104687762Z" level=info msg="CreateContainer within sandbox 
\"402ba754b49fa4bcfb7f61fa54cb2cf51d1d930c350a36baa407468a9f35e210\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06532a77dce57304bad88f7cc8bc9a7f6976a20fca9c3a3d70b95f02d01f7fe5\"" Feb 13 15:17:52.106417 containerd[1444]: time="2025-02-13T15:17:52.105739296Z" level=info msg="StartContainer for \"06532a77dce57304bad88f7cc8bc9a7f6976a20fca9c3a3d70b95f02d01f7fe5\"" Feb 13 15:17:52.110700 containerd[1444]: time="2025-02-13T15:17:52.110551095Z" level=info msg="CreateContainer within sandbox \"eca6d3413f273cbc07d3e20efa708bdece49e8ee123c290043af3470df7c0f85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"574a654f383c7843ac3a40b9c14046d7122694255d965174000d9456c9730126\"" Feb 13 15:17:52.111666 containerd[1444]: time="2025-02-13T15:17:52.111432951Z" level=info msg="StartContainer for \"574a654f383c7843ac3a40b9c14046d7122694255d965174000d9456c9730126\"" Feb 13 15:17:52.112825 containerd[1444]: time="2025-02-13T15:17:52.112524113Z" level=info msg="CreateContainer within sandbox \"7e2be139be935d6896ad5dad14c41cc24a2e95a17c991c2862ef857a810d2597\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a111148559566dd7cffa3523380789fc2a62864ca4760dac2048e39de2b60e16\"" Feb 13 15:17:52.113086 containerd[1444]: time="2025-02-13T15:17:52.113061208Z" level=info msg="StartContainer for \"a111148559566dd7cffa3523380789fc2a62864ca4760dac2048e39de2b60e16\"" Feb 13 15:17:52.133201 systemd[1]: Started cri-containerd-06532a77dce57304bad88f7cc8bc9a7f6976a20fca9c3a3d70b95f02d01f7fe5.scope - libcontainer container 06532a77dce57304bad88f7cc8bc9a7f6976a20fca9c3a3d70b95f02d01f7fe5. Feb 13 15:17:52.137133 systemd[1]: Started cri-containerd-574a654f383c7843ac3a40b9c14046d7122694255d965174000d9456c9730126.scope - libcontainer container 574a654f383c7843ac3a40b9c14046d7122694255d965174000d9456c9730126. Feb 13 15:17:52.142402 systemd[1]: Started cri-containerd-a111148559566dd7cffa3523380789fc2a62864ca4760dac2048e39de2b60e16.scope - libcontainer container a111148559566dd7cffa3523380789fc2a62864ca4760dac2048e39de2b60e16. 
Feb 13 15:17:52.182210 containerd[1444]: time="2025-02-13T15:17:52.182158886Z" level=info msg="StartContainer for \"06532a77dce57304bad88f7cc8bc9a7f6976a20fca9c3a3d70b95f02d01f7fe5\" returns successfully" Feb 13 15:17:52.182335 containerd[1444]: time="2025-02-13T15:17:52.182283093Z" level=info msg="StartContainer for \"a111148559566dd7cffa3523380789fc2a62864ca4760dac2048e39de2b60e16\" returns successfully" Feb 13 15:17:52.212171 kubelet[2192]: I0213 15:17:52.211538 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:52.212171 kubelet[2192]: E0213 15:17:52.211904 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:17:52.213124 containerd[1444]: time="2025-02-13T15:17:52.213001418Z" level=info msg="StartContainer for \"574a654f383c7843ac3a40b9c14046d7122694255d965174000d9456c9730126\" returns successfully" Feb 13 15:17:52.276296 kubelet[2192]: W0213 15:17:52.276129 2192 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:52.276296 kubelet[2192]: E0213 15:17:52.276184 2192 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:17:52.735486 kubelet[2192]: E0213 15:17:52.735085 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:52.738907 kubelet[2192]: E0213 15:17:52.738793 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:52.741955 kubelet[2192]: E0213 15:17:52.741821 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:53.747053 kubelet[2192]: E0213 15:17:53.746756 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:53.747833 kubelet[2192]: E0213 15:17:53.747450 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:53.813718 kubelet[2192]: I0213 15:17:53.813687 2192 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:54.220681 kubelet[2192]: E0213 15:17:54.219573 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:17:54.304973 kubelet[2192]: I0213 15:17:54.304914 2192 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:17:54.326596 kubelet[2192]: E0213 15:17:54.326419 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:17:54.427050 kubelet[2192]: E0213 
15:17:54.427007 2192 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:17:54.692457 kubelet[2192]: I0213 15:17:54.692319 2192 apiserver.go:52] "Watching apiserver" Feb 13 15:17:54.700019 kubelet[2192]: I0213 15:17:54.699968 2192 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:17:56.846048 kubelet[2192]: E0213 15:17:56.846006 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:56.927079 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-5.scope)... Feb 13 15:17:56.927097 systemd[1]: Reloading... Feb 13 15:17:56.992407 zram_generator::config[2516]: No configuration found. Feb 13 15:17:57.075091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:57.139445 systemd[1]: Reloading finished in 212 ms. Feb 13 15:17:57.168560 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:57.183493 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:17:57.183723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:57.183779 systemd[1]: kubelet.service: Consumed 1.831s CPU time, 114.5M memory peak, 0B memory swap peak. Feb 13 15:17:57.194745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:57.287837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:57.293872 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:57.340632 kubelet[2555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:57.340632 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:57.340632 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:57.340998 kubelet[2555]: I0213 15:17:57.340680 2555 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:57.344909 kubelet[2555]: I0213 15:17:57.344865 2555 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:17:57.344909 kubelet[2555]: I0213 15:17:57.344897 2555 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:57.345087 kubelet[2555]: I0213 15:17:57.345070 2555 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:17:57.346615 kubelet[2555]: I0213 15:17:57.346587 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
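"Client rotation is on, will bootstrap in background" together with the certificate_store line means kubelet keeps each rotated client certificate and its private key concatenated in a single PEM file, with kubelet-client-current.pem pointing at the newest pair. Loading such a combined file needs nothing special; a stdlib sketch using the path from the log:

```go
// Load kubelet's combined client cert/key PEM. Because the file holds
// both the CERTIFICATE and the PRIVATE KEY blocks, the same path is
// passed for both arguments of LoadX509KeyPair.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	const p = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(p, p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded client certificate with %d chain element(s)\n",
		len(cert.Certificate))
}
```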
Feb 13 15:17:57.348442 kubelet[2555]: I0213 15:17:57.348408 2555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:57.354512 kubelet[2555]: I0213 15:17:57.354481 2555 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:17:57.354699 kubelet[2555]: I0213 15:17:57.354688 2555 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:57.354866 kubelet[2555]: I0213 15:17:57.354851 2555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:57.354938 kubelet[2555]: I0213 15:17:57.354873 2555 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:57.354938 kubelet[2555]: I0213 15:17:57.354882 2555 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:57.354938 kubelet[2555]: I0213 15:17:57.354911 2555 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:57.355140 kubelet[2555]: I0213 15:17:57.355101 2555 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:17:57.355140 kubelet[2555]: I0213 15:17:57.355124 2555 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:57.355326 kubelet[2555]: I0213 15:17:57.355303 2555 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:57.355914 kubelet[2555]: I0213 15:17:57.355414 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:57.356231 kubelet[2555]: I0213 15:17:57.356208 2555 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:57.356547 kubelet[2555]: I0213 15:17:57.356529 2555 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:57.357084 kubelet[2555]: I0213 15:17:57.357060 2555 server.go:1256] "Started kubelet" Feb 13 15:17:57.360688 kubelet[2555]: I0213 15:17:57.360657 2555 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:57.362067 kubelet[2555]: I0213 15:17:57.362024 2555 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:57.364570 kubelet[2555]: E0213 15:17:57.364023 2555 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:17:57.364570 kubelet[2555]: I0213 15:17:57.364058 2555 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:57.364570 kubelet[2555]: I0213 15:17:57.364165 2555 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:17:57.364570 kubelet[2555]: I0213 15:17:57.364297 2555 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:17:57.367785 kubelet[2555]: I0213 15:17:57.361096 2555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:57.367785 kubelet[2555]: I0213 15:17:57.366383 2555 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:57.368111 kubelet[2555]: I0213 15:17:57.368083 2555 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:57.368175 kubelet[2555]: I0213 15:17:57.368154 2555 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:17:57.370298 kubelet[2555]: I0213 15:17:57.368187 2555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:57.379014 kubelet[2555]: E0213 15:17:57.378986 2555 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:57.381065 kubelet[2555]: I0213 15:17:57.381037 2555 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:57.385385 kubelet[2555]: I0213 15:17:57.385335 2555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:57.388247 kubelet[2555]: I0213 15:17:57.388202 2555 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:17:57.388247 kubelet[2555]: I0213 15:17:57.388237 2555 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:57.388247 kubelet[2555]: I0213 15:17:57.388255 2555 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:17:57.388448 kubelet[2555]: E0213 15:17:57.388304 2555 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:57.424802 kubelet[2555]: I0213 15:17:57.423706 2555 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:57.425123 kubelet[2555]: I0213 15:17:57.425101 2555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:57.425204 kubelet[2555]: I0213 15:17:57.425195 2555 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:57.425536 kubelet[2555]: I0213 15:17:57.425516 2555 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:17:57.425621 kubelet[2555]: I0213 15:17:57.425609 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:17:57.425694 kubelet[2555]: I0213 15:17:57.425684 2555 policy_none.go:49] "None policy: Start" Feb 13 15:17:57.426480 kubelet[2555]: I0213 15:17:57.426459 2555 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:57.426640 kubelet[2555]: I0213 15:17:57.426629 2555 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:57.426917 kubelet[2555]: I0213 15:17:57.426898 2555 state_mem.go:75] "Updated machine memory state" Feb 13 15:17:57.431860 kubelet[2555]: I0213 15:17:57.431834 2555 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:57.432100 kubelet[2555]: I0213 15:17:57.432076 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:57.467900 kubelet[2555]: I0213 15:17:57.467868 2555 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:17:57.476327 kubelet[2555]: I0213 15:17:57.476260 2555 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:17:57.476580 kubelet[2555]: I0213 15:17:57.476565 2555 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:17:57.488666 kubelet[2555]: I0213 15:17:57.488638 2555 topology_manager.go:215] "Topology Admit Handler" podUID="927eb4369e17571e4d8ae8e93ce815d5" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:17:57.489149 kubelet[2555]: I0213 15:17:57.488917 2555 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:17:57.489149 kubelet[2555]: I0213 15:17:57.488995 2555 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:17:57.495625 kubelet[2555]: E0213 15:17:57.495594 2555 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665599 kubelet[2555]: I0213 15:17:57.665561 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:57.665599 kubelet[2555]: I0213 15:17:57.665605 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665737 kubelet[2555]: I0213 15:17:57.665631 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665737 kubelet[2555]: I0213 15:17:57.665650 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665737 kubelet[2555]: I0213 15:17:57.665671 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665737 kubelet[2555]: I0213 15:17:57.665691 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:57.665737 kubelet[2555]: I0213 15:17:57.665711 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/927eb4369e17571e4d8ae8e93ce815d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"927eb4369e17571e4d8ae8e93ce815d5\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:57.665869 kubelet[2555]: I0213 15:17:57.665730 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:17:57.665869 kubelet[2555]: I0213 15:17:57.665752 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:17:57.797504 kubelet[2555]: E0213 15:17:57.797394 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:57.797504 kubelet[2555]: E0213 15:17:57.797397 
2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:57.797797 kubelet[2555]: E0213 15:17:57.797635 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:58.132465 sudo[1580]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:58.135635 sshd[1579]: Connection closed by 10.0.0.1 port 39152 Feb 13 15:17:58.135997 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:58.141023 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:39152.service: Deactivated successfully. Feb 13 15:17:58.143962 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:58.144136 systemd[1]: session-5.scope: Consumed 6.567s CPU time, 191.4M memory peak, 0B memory swap peak. Feb 13 15:17:58.144962 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:58.146160 systemd-logind[1423]: Removed session 5. Feb 13 15:17:58.356296 kubelet[2555]: I0213 15:17:58.356252 2555 apiserver.go:52] "Watching apiserver" Feb 13 15:17:58.365098 kubelet[2555]: I0213 15:17:58.365046 2555 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:17:58.402501 kubelet[2555]: E0213 15:17:58.402352 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:58.419681 kubelet[2555]: E0213 15:17:58.419632 2555 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:17:58.422390 kubelet[2555]: E0213 15:17:58.420090 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:58.422390 kubelet[2555]: E0213 15:17:58.419632 2555 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:17:58.422390 kubelet[2555]: E0213 15:17:58.420647 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:58.435647 kubelet[2555]: I0213 15:17:58.435606 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.435539972 podStartE2EDuration="1.435539972s" podCreationTimestamp="2025-02-13 15:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:58.429357361 +0000 UTC m=+1.130264432" watchObservedRunningTime="2025-02-13 15:17:58.435539972 +0000 UTC m=+1.136447043" Feb 13 15:17:58.454179 kubelet[2555]: I0213 15:17:58.454135 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.454080642 podStartE2EDuration="1.454080642s" podCreationTimestamp="2025-02-13 15:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:58.445074652 
+0000 UTC m=+1.145981763" watchObservedRunningTime="2025-02-13 15:17:58.454080642 +0000 UTC m=+1.154987753" Feb 13 15:17:58.466566 kubelet[2555]: I0213 15:17:58.466532 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.466494447 podStartE2EDuration="2.466494447s" podCreationTimestamp="2025-02-13 15:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:58.454060672 +0000 UTC m=+1.154967743" watchObservedRunningTime="2025-02-13 15:17:58.466494447 +0000 UTC m=+1.167401558" Feb 13 15:17:59.403516 kubelet[2555]: E0213 15:17:59.403476 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:17:59.403846 kubelet[2555]: E0213 15:17:59.403763 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:00.405066 kubelet[2555]: E0213 15:18:00.405032 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:01.406559 kubelet[2555]: E0213 15:18:01.406454 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:01.802556 kubelet[2555]: E0213 15:18:01.802135 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:01.873645 kubelet[2555]: E0213 15:18:01.873609 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:10.871606 kubelet[2555]: E0213 15:18:10.871198 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:11.123581 update_engine[1429]: I20250213 15:18:11.123437 1429 update_attempter.cc:509] Updating boot flags... Feb 13 15:18:11.150420 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2629) Feb 13 15:18:11.207409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2629) Feb 13 15:18:11.421006 kubelet[2555]: E0213 15:18:11.420891 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:11.604118 kubelet[2555]: I0213 15:18:11.604074 2555 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:18:11.604520 containerd[1444]: time="2025-02-13T15:18:11.604474336Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
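"Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" is kubelet handing the node's pod CIDR down to the container runtime; flannel will later carve its per-node subnet out of the cluster range. Parsing and sanity-checking such a CIDR is plain stdlib work — a small, purely illustrative sketch:

```go
// Parse the pod CIDR that kubelet pushed to the runtime and report
// its address capacity. Purely illustrative arithmetic.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		log.Fatal(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %s: %d host bits, %d addresses\n",
		ipnet, bits-ones, 1<<(bits-ones)) // /24 -> 8 host bits, 256 addresses
}
```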
Feb 13 15:18:11.604794 kubelet[2555]: I0213 15:18:11.604750 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:18:11.809142 kubelet[2555]: E0213 15:18:11.809109 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:11.881125 kubelet[2555]: E0213 15:18:11.881085 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:12.786639 kubelet[2555]: I0213 15:18:12.786588 2555 topology_manager.go:215] "Topology Admit Handler" podUID="b08b4d8b-2c45-4e99-adda-22add4af9784" podNamespace="kube-system" podName="kube-proxy-g45sb" Feb 13 15:18:12.799203 kubelet[2555]: I0213 15:18:12.799152 2555 topology_manager.go:215] "Topology Admit Handler" podUID="d0a2ab71-96d2-49f9-a6e0-4fccd0981960" podNamespace="kube-flannel" podName="kube-flannel-ds-htwqc" Feb 13 15:18:12.804170 systemd[1]: Created slice kubepods-besteffort-podb08b4d8b_2c45_4e99_adda_22add4af9784.slice - libcontainer container kubepods-besteffort-podb08b4d8b_2c45_4e99_adda_22add4af9784.slice. Feb 13 15:18:12.818305 systemd[1]: Created slice kubepods-burstable-podd0a2ab71_96d2_49f9_a6e0_4fccd0981960.slice - libcontainer container kubepods-burstable-podd0a2ab71_96d2_49f9_a6e0_4fccd0981960.slice. Feb 13 15:18:12.875680 kubelet[2555]: I0213 15:18:12.875634 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-run\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:12.875680 kubelet[2555]: I0213 15:18:12.875682 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j597p\" (UniqueName: \"kubernetes.io/projected/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-kube-api-access-j597p\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:12.875849 kubelet[2555]: I0213 15:18:12.875711 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b08b4d8b-2c45-4e99-adda-22add4af9784-xtables-lock\") pod \"kube-proxy-g45sb\" (UID: \"b08b4d8b-2c45-4e99-adda-22add4af9784\") " pod="kube-system/kube-proxy-g45sb" Feb 13 15:18:12.875849 kubelet[2555]: I0213 15:18:12.875731 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tsb\" (UniqueName: \"kubernetes.io/projected/b08b4d8b-2c45-4e99-adda-22add4af9784-kube-api-access-f6tsb\") pod \"kube-proxy-g45sb\" (UID: \"b08b4d8b-2c45-4e99-adda-22add4af9784\") " pod="kube-system/kube-proxy-g45sb" Feb 13 15:18:12.875849 kubelet[2555]: I0213 15:18:12.875751 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b08b4d8b-2c45-4e99-adda-22add4af9784-lib-modules\") pod \"kube-proxy-g45sb\" (UID: \"b08b4d8b-2c45-4e99-adda-22add4af9784\") " pod="kube-system/kube-proxy-g45sb" Feb 13 15:18:12.875849 kubelet[2555]: I0213 15:18:12.875770 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-flannel-cfg\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:12.875849 kubelet[2555]: I0213 15:18:12.875792 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-cni\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:12.876015 kubelet[2555]: I0213 15:18:12.875812 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b08b4d8b-2c45-4e99-adda-22add4af9784-kube-proxy\") pod \"kube-proxy-g45sb\" (UID: \"b08b4d8b-2c45-4e99-adda-22add4af9784\") " pod="kube-system/kube-proxy-g45sb" Feb 13 15:18:12.876015 kubelet[2555]: I0213 15:18:12.875831 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-cni-plugin\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:12.876015 kubelet[2555]: I0213 15:18:12.875851 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0a2ab71-96d2-49f9-a6e0-4fccd0981960-xtables-lock\") pod \"kube-flannel-ds-htwqc\" (UID: \"d0a2ab71-96d2-49f9-a6e0-4fccd0981960\") " pod="kube-flannel/kube-flannel-ds-htwqc" Feb 13 15:18:13.114936 kubelet[2555]: E0213 15:18:13.114827 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:13.116266 containerd[1444]: time="2025-02-13T15:18:13.116217085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g45sb,Uid:b08b4d8b-2c45-4e99-adda-22add4af9784,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:13.123491 kubelet[2555]: E0213 15:18:13.123461 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:13.124963 containerd[1444]: time="2025-02-13T15:18:13.123988085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-htwqc,Uid:d0a2ab71-96d2-49f9-a6e0-4fccd0981960,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:18:13.146329 containerd[1444]: time="2025-02-13T15:18:13.146147596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:13.146329 containerd[1444]: time="2025-02-13T15:18:13.146249494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:13.146329 containerd[1444]: time="2025-02-13T15:18:13.146273299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:13.146589 containerd[1444]: time="2025-02-13T15:18:13.146390080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:13.154984 containerd[1444]: time="2025-02-13T15:18:13.154840642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:13.154984 containerd[1444]: time="2025-02-13T15:18:13.154904013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:13.154984 containerd[1444]: time="2025-02-13T15:18:13.154925617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:13.155284 containerd[1444]: time="2025-02-13T15:18:13.155194065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:13.170568 systemd[1]: Started cri-containerd-94f5ddcbb8fb7e7cbd59afdb8a869b9d45eeec80d9e46e84ada5573e2a9e7bf5.scope - libcontainer container 94f5ddcbb8fb7e7cbd59afdb8a869b9d45eeec80d9e46e84ada5573e2a9e7bf5. Feb 13 15:18:13.173242 systemd[1]: Started cri-containerd-74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547.scope - libcontainer container 74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547. Feb 13 15:18:13.198163 containerd[1444]: time="2025-02-13T15:18:13.198114636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g45sb,Uid:b08b4d8b-2c45-4e99-adda-22add4af9784,Namespace:kube-system,Attempt:0,} returns sandbox id \"94f5ddcbb8fb7e7cbd59afdb8a869b9d45eeec80d9e46e84ada5573e2a9e7bf5\"" Feb 13 15:18:13.199108 kubelet[2555]: E0213 15:18:13.199082 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:13.207684 containerd[1444]: time="2025-02-13T15:18:13.207640472Z" level=info msg="CreateContainer within sandbox \"94f5ddcbb8fb7e7cbd59afdb8a869b9d45eeec80d9e46e84ada5573e2a9e7bf5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:18:13.216576 containerd[1444]: time="2025-02-13T15:18:13.216526232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-htwqc,Uid:d0a2ab71-96d2-49f9-a6e0-4fccd0981960,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\"" Feb 13 15:18:13.219084 kubelet[2555]: E0213 15:18:13.219061 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:13.223178 containerd[1444]: time="2025-02-13T15:18:13.223128942Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:18:13.237665 containerd[1444]: time="2025-02-13T15:18:13.237609030Z" level=info msg="CreateContainer within sandbox \"94f5ddcbb8fb7e7cbd59afdb8a869b9d45eeec80d9e46e84ada5573e2a9e7bf5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c21fe83d7710a789197816c4ad776f8c0ce90ae84971818f7747d953e3ab2b68\"" Feb 13 15:18:13.238193 containerd[1444]: time="2025-02-13T15:18:13.238163490Z" level=info msg="StartContainer for \"c21fe83d7710a789197816c4ad776f8c0ce90ae84971818f7747d953e3ab2b68\"" Feb 13 15:18:13.266551 systemd[1]: Started cri-containerd-c21fe83d7710a789197816c4ad776f8c0ce90ae84971818f7747d953e3ab2b68.scope - libcontainer container 
c21fe83d7710a789197816c4ad776f8c0ce90ae84971818f7747d953e3ab2b68. Feb 13 15:18:13.294117 containerd[1444]: time="2025-02-13T15:18:13.292286078Z" level=info msg="StartContainer for \"c21fe83d7710a789197816c4ad776f8c0ce90ae84971818f7747d953e3ab2b68\" returns successfully" Feb 13 15:18:13.426269 kubelet[2555]: E0213 15:18:13.426164 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:13.436356 kubelet[2555]: I0213 15:18:13.436158 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-g45sb" podStartSLOduration=1.436115784 podStartE2EDuration="1.436115784s" podCreationTimestamp="2025-02-13 15:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:13.436069655 +0000 UTC m=+16.136976766" watchObservedRunningTime="2025-02-13 15:18:13.436115784 +0000 UTC m=+16.137022895" Feb 13 15:18:14.542621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201236625.mount: Deactivated successfully. Feb 13 15:18:14.577491 containerd[1444]: time="2025-02-13T15:18:14.577031561Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:14.577873 containerd[1444]: time="2025-02-13T15:18:14.577501881Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:18:14.578430 containerd[1444]: time="2025-02-13T15:18:14.578350504Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:14.580770 containerd[1444]: time="2025-02-13T15:18:14.580694060Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:14.581752 containerd[1444]: time="2025-02-13T15:18:14.581690988Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.358515718s" Feb 13 15:18:14.581752 containerd[1444]: time="2025-02-13T15:18:14.581726314Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:18:14.586188 containerd[1444]: time="2025-02-13T15:18:14.584344276Z" level=info msg="CreateContainer within sandbox \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:18:14.599909 containerd[1444]: time="2025-02-13T15:18:14.599415261Z" level=info msg="CreateContainer within sandbox \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57\"" Feb 13 15:18:14.601863 containerd[1444]: 
time="2025-02-13T15:18:14.600994928Z" level=info msg="StartContainer for \"21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57\"" Feb 13 15:18:14.625548 systemd[1]: Started cri-containerd-21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57.scope - libcontainer container 21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57. Feb 13 15:18:14.647918 systemd[1]: cri-containerd-21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57.scope: Deactivated successfully. Feb 13 15:18:14.648178 containerd[1444]: time="2025-02-13T15:18:14.648126446Z" level=info msg="StartContainer for \"21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57\" returns successfully" Feb 13 15:18:14.691937 containerd[1444]: time="2025-02-13T15:18:14.691874834Z" level=info msg="shim disconnected" id=21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57 namespace=k8s.io Feb 13 15:18:14.691937 containerd[1444]: time="2025-02-13T15:18:14.691932523Z" level=warning msg="cleaning up after shim disconnected" id=21ee2f369885c6d58864a0ce4aa9b069ef4fbe3b046991f05c3750a86c326b57 namespace=k8s.io Feb 13 15:18:14.691937 containerd[1444]: time="2025-02-13T15:18:14.691940965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:15.429732 kubelet[2555]: E0213 15:18:15.429655 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:15.430263 containerd[1444]: time="2025-02-13T15:18:15.430215783Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:18:16.715756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553152545.mount: Deactivated successfully. Feb 13 15:18:17.424375 containerd[1444]: time="2025-02-13T15:18:17.424306895Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.425979 containerd[1444]: time="2025-02-13T15:18:17.425934802Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 15:18:17.427055 containerd[1444]: time="2025-02-13T15:18:17.427023553Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.430273 containerd[1444]: time="2025-02-13T15:18:17.430234760Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:17.433092 containerd[1444]: time="2025-02-13T15:18:17.433062634Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.002807445s" Feb 13 15:18:17.433092 containerd[1444]: time="2025-02-13T15:18:17.433093678Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:18:17.435651 containerd[1444]: time="2025-02-13T15:18:17.435622030Z" level=info msg="CreateContainer within sandbox 
\"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:18:17.461907 containerd[1444]: time="2025-02-13T15:18:17.461848199Z" level=info msg="CreateContainer within sandbox \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a\"" Feb 13 15:18:17.462593 containerd[1444]: time="2025-02-13T15:18:17.462548176Z" level=info msg="StartContainer for \"8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a\"" Feb 13 15:18:17.496556 systemd[1]: Started cri-containerd-8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a.scope - libcontainer container 8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a. Feb 13 15:18:17.532045 containerd[1444]: time="2025-02-13T15:18:17.531983317Z" level=info msg="StartContainer for \"8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a\" returns successfully" Feb 13 15:18:17.537679 systemd[1]: cri-containerd-8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a.scope: Deactivated successfully. Feb 13 15:18:17.593012 kubelet[2555]: I0213 15:18:17.592142 2555 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:18:17.637754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a-rootfs.mount: Deactivated successfully. Feb 13 15:18:17.665944 kubelet[2555]: I0213 15:18:17.665898 2555 topology_manager.go:215] "Topology Admit Handler" podUID="5f213dde-b8cc-4b82-9be6-3c6bba835cb9" podNamespace="kube-system" podName="coredns-76f75df574-pldvs" Feb 13 15:18:17.666150 kubelet[2555]: I0213 15:18:17.666120 2555 topology_manager.go:215] "Topology Admit Handler" podUID="0efdccd4-3fbc-4439-aa0e-09fd9e2186df" podNamespace="kube-system" podName="coredns-76f75df574-sv8mh" Feb 13 15:18:17.679031 containerd[1444]: time="2025-02-13T15:18:17.678889958Z" level=info msg="shim disconnected" id=8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a namespace=k8s.io Feb 13 15:18:17.679031 containerd[1444]: time="2025-02-13T15:18:17.678953447Z" level=warning msg="cleaning up after shim disconnected" id=8965e81a7748f333c7c84ced8cd80247670639c5744b6e62fbed1c910310c02a namespace=k8s.io Feb 13 15:18:17.679031 containerd[1444]: time="2025-02-13T15:18:17.678961728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:17.679440 systemd[1]: Created slice kubepods-burstable-pod0efdccd4_3fbc_4439_aa0e_09fd9e2186df.slice - libcontainer container kubepods-burstable-pod0efdccd4_3fbc_4439_aa0e_09fd9e2186df.slice. Feb 13 15:18:17.686249 systemd[1]: Created slice kubepods-burstable-pod5f213dde_b8cc_4b82_9be6_3c6bba835cb9.slice - libcontainer container kubepods-burstable-pod5f213dde_b8cc_4b82_9be6_3c6bba835cb9.slice. 
Feb 13 15:18:17.708647 kubelet[2555]: I0213 15:18:17.708601 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f213dde-b8cc-4b82-9be6-3c6bba835cb9-config-volume\") pod \"coredns-76f75df574-pldvs\" (UID: \"5f213dde-b8cc-4b82-9be6-3c6bba835cb9\") " pod="kube-system/coredns-76f75df574-pldvs"
Feb 13 15:18:17.708647 kubelet[2555]: I0213 15:18:17.708650 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ll84\" (UniqueName: \"kubernetes.io/projected/5f213dde-b8cc-4b82-9be6-3c6bba835cb9-kube-api-access-2ll84\") pod \"coredns-76f75df574-pldvs\" (UID: \"5f213dde-b8cc-4b82-9be6-3c6bba835cb9\") " pod="kube-system/coredns-76f75df574-pldvs"
Feb 13 15:18:17.708790 kubelet[2555]: I0213 15:18:17.708675 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkssv\" (UniqueName: \"kubernetes.io/projected/0efdccd4-3fbc-4439-aa0e-09fd9e2186df-kube-api-access-wkssv\") pod \"coredns-76f75df574-sv8mh\" (UID: \"0efdccd4-3fbc-4439-aa0e-09fd9e2186df\") " pod="kube-system/coredns-76f75df574-sv8mh"
Feb 13 15:18:17.708790 kubelet[2555]: I0213 15:18:17.708708 2555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efdccd4-3fbc-4439-aa0e-09fd9e2186df-config-volume\") pod \"coredns-76f75df574-sv8mh\" (UID: \"0efdccd4-3fbc-4439-aa0e-09fd9e2186df\") " pod="kube-system/coredns-76f75df574-sv8mh"
Feb 13 15:18:17.985031 kubelet[2555]: E0213 15:18:17.984874 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:17.986026 containerd[1444]: time="2025-02-13T15:18:17.985623517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sv8mh,Uid:0efdccd4-3fbc-4439-aa0e-09fd9e2186df,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:17.988488 kubelet[2555]: E0213 15:18:17.988465 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:17.988973 containerd[1444]: time="2025-02-13T15:18:17.988937218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pldvs,Uid:5f213dde-b8cc-4b82-9be6-3c6bba835cb9,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:18.172383 containerd[1444]: time="2025-02-13T15:18:18.172284878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pldvs,Uid:5f213dde-b8cc-4b82-9be6-3c6bba835cb9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:18.172622 kubelet[2555]: E0213 15:18:18.172582 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:18.172676 kubelet[2555]: E0213 15:18:18.172646 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pldvs"
Feb 13 15:18:18.172701 kubelet[2555]: E0213 15:18:18.172693 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pldvs"
Feb 13 15:18:18.172828 kubelet[2555]: E0213 15:18:18.172757 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pldvs_kube-system(5f213dde-b8cc-4b82-9be6-3c6bba835cb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pldvs_kube-system(5f213dde-b8cc-4b82-9be6-3c6bba835cb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-pldvs" podUID="5f213dde-b8cc-4b82-9be6-3c6bba835cb9"
Feb 13 15:18:18.173149 containerd[1444]: time="2025-02-13T15:18:18.173027855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sv8mh,Uid:0efdccd4-3fbc-4439-aa0e-09fd9e2186df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:18.173340 kubelet[2555]: E0213 15:18:18.173225 2555 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:18.174589 kubelet[2555]: E0213 15:18:18.174547 2555 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-sv8mh"
Feb 13 15:18:18.174678 kubelet[2555]: E0213 15:18:18.174656 2555 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-sv8mh"
Feb 13 15:18:18.174809 kubelet[2555]: E0213 15:18:18.174792 2555 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sv8mh_kube-system(0efdccd4-3fbc-4439-aa0e-09fd9e2186df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sv8mh_kube-system(0efdccd4-3fbc-4439-aa0e-09fd9e2186df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-sv8mh" podUID="0efdccd4-3fbc-4439-aa0e-09fd9e2186df"
Feb 13 15:18:18.436481 kubelet[2555]: E0213 15:18:18.435396 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:18.438763 containerd[1444]: time="2025-02-13T15:18:18.438724914Z" level=info msg="CreateContainer within sandbox \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:18:18.465170 containerd[1444]: time="2025-02-13T15:18:18.464598769Z" level=info msg="CreateContainer within sandbox \"74fa6d1ff1c80c6f53b4bf25e3aa51497f5566e2a514fcdea5e7171821b2c547\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f521284facbce9ed547243bb57b5764c9428f64d8522415019dc23c631d41930\""
Feb 13 15:18:18.465398 containerd[1444]: time="2025-02-13T15:18:18.465353307Z" level=info msg="StartContainer for \"f521284facbce9ed547243bb57b5764c9428f64d8522415019dc23c631d41930\""
Feb 13 15:18:18.487540 systemd[1]: Started cri-containerd-f521284facbce9ed547243bb57b5764c9428f64d8522415019dc23c631d41930.scope - libcontainer container f521284facbce9ed547243bb57b5764c9428f64d8522415019dc23c631d41930.
Feb 13 15:18:18.512626 containerd[1444]: time="2025-02-13T15:18:18.511416876Z" level=info msg="StartContainer for \"f521284facbce9ed547243bb57b5764c9428f64d8522415019dc23c631d41930\" returns successfully"
Feb 13 15:18:18.640804 systemd[1]: run-netns-cni\x2d7c8f9eda\x2d8b74\x2ddfc7\x2d54d2\x2dca66757df5e5.mount: Deactivated successfully.
Feb 13 15:18:18.640900 systemd[1]: run-netns-cni\x2de28a8513\x2d8114\x2d8344\x2d1bd1\x2d9a1e829ff9e6.mount: Deactivated successfully.
Feb 13 15:18:18.640949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f88160db03163e1a9c634ca3e80a0d23c10431a6c812a9edb8575149932d7b2-shm.mount: Deactivated successfully.
Feb 13 15:18:18.641001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65591fcf585e2af866db1c1a5e0656d3fd354e8d50933ae972c20fbc8f8bd75c-shm.mount: Deactivated successfully.
Feb 13 15:18:19.439697 kubelet[2555]: E0213 15:18:19.439232 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:19.657457 systemd-networkd[1390]: flannel.1: Link UP
Feb 13 15:18:19.657465 systemd-networkd[1390]: flannel.1: Gained carrier
Feb 13 15:18:20.441489 kubelet[2555]: E0213 15:18:20.441445 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:21.143504 systemd-networkd[1390]: flannel.1: Gained IPv6LL
Feb 13 15:18:22.778699 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:58226.service - OpenSSH per-connection server daemon (10.0.0.1:58226).
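
Note: the repeated "loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" above means the flannel CNI plugin ran before the flanneld daemon had written its subnet lease file; the CoreDNS sandboxes keep failing until that file exists (the retries succeed at 15:18:29 below). A plausible /run/flannel/subnet.env for this node, inferred from the delegate netconf later in this log (network 192.168.0.0/17, node subnet 192.168.0.0/24, MTU 1450, ipMasq false) rather than captured from the machine, and with the .1 gateway form of FLANNEL_SUBNET being an assumption:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false
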
Feb 13 15:18:22.821200 sshd[3208]: Accepted publickey for core from 10.0.0.1 port 58226 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:22.822663 sshd-session[3208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:22.828510 systemd-logind[1423]: New session 6 of user core.
Feb 13 15:18:22.837577 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:18:22.960362 sshd[3210]: Connection closed by 10.0.0.1 port 58226
Feb 13 15:18:22.960874 sshd-session[3208]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:22.964675 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:58226.service: Deactivated successfully.
Feb 13 15:18:22.966268 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:18:22.967282 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:18:22.972043 systemd-logind[1423]: Removed session 6.
Feb 13 15:18:27.973865 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:58234.service - OpenSSH per-connection server daemon (10.0.0.1:58234).
Feb 13 15:18:28.016213 sshd[3246]: Accepted publickey for core from 10.0.0.1 port 58234 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:28.017670 sshd-session[3246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:28.021990 systemd-logind[1423]: New session 7 of user core.
Feb 13 15:18:28.032553 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:18:28.146516 sshd[3248]: Connection closed by 10.0.0.1 port 58234
Feb 13 15:18:28.148023 sshd-session[3246]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:28.151395 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:58234.service: Deactivated successfully.
Feb 13 15:18:28.153701 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:18:28.156023 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:18:28.156938 systemd-logind[1423]: Removed session 7.
Feb 13 15:18:29.389207 kubelet[2555]: E0213 15:18:29.389175 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:29.390293 containerd[1444]: time="2025-02-13T15:18:29.389762587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sv8mh,Uid:0efdccd4-3fbc-4439-aa0e-09fd9e2186df,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:29.440481 systemd-networkd[1390]: cni0: Link UP
Feb 13 15:18:29.440493 systemd-networkd[1390]: cni0: Gained carrier
Feb 13 15:18:29.443305 systemd-networkd[1390]: cni0: Lost carrier
Feb 13 15:18:29.447499 systemd-networkd[1390]: veth56a41973: Link UP
Feb 13 15:18:29.450938 kernel: cni0: port 1(veth56a41973) entered blocking state
Feb 13 15:18:29.451110 kernel: cni0: port 1(veth56a41973) entered disabled state
Feb 13 15:18:29.451139 kernel: veth56a41973: entered allmulticast mode
Feb 13 15:18:29.452394 kernel: veth56a41973: entered promiscuous mode
Feb 13 15:18:29.454963 kernel: cni0: port 1(veth56a41973) entered blocking state
Feb 13 15:18:29.455044 kernel: cni0: port 1(veth56a41973) entered forwarding state
Feb 13 15:18:29.455064 kernel: cni0: port 1(veth56a41973) entered disabled state
Feb 13 15:18:29.472665 kernel: cni0: port 1(veth56a41973) entered blocking state
Feb 13 15:18:29.472867 kernel: cni0: port 1(veth56a41973) entered forwarding state
Feb 13 15:18:29.473070 systemd-networkd[1390]: veth56a41973: Gained carrier
Feb 13 15:18:29.474893 systemd-networkd[1390]: cni0: Gained carrier
Feb 13 15:18:29.476605 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"}
Feb 13 15:18:29.476605 containerd[1444]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:18:29.504531 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:29.504203888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:29.504531 containerd[1444]: time="2025-02-13T15:18:29.504275932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:29.504531 containerd[1444]: time="2025-02-13T15:18:29.504291893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:29.504717 containerd[1444]: time="2025-02-13T15:18:29.504540709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:29.528563 systemd[1]: Started cri-containerd-d9ccf6b981205fa48d70b92bd1e3f59e83fad37b621a020797004bbfb9bffd94.scope - libcontainer container d9ccf6b981205fa48d70b92bd1e3f59e83fad37b621a020797004bbfb9bffd94.
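
Note: the two containerd lines above show the flannel CNI plugin's delegation step working this time: it loads /run/flannel/subnet.env and hands a generated bridge netconf (name cbr0, host-local IPAM over 192.168.0.0/24, mtu 1450) to the bridge plugin via delegateAdd. A minimal Go sketch of the env-file parsing that had been failing earlier, written for illustration only and not taken from the flannel plugin's actual source:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // loadSubnetEnv parses the KEY=VALUE pairs flanneld writes once it holds
    // a subnet lease. Before flanneld has run, os.Open fails with exactly the
    // ENOENT surfaced in the sandbox errors earlier in this log.
    func loadSubnetEnv(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        env := make(map[string]string)
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if key, val, ok := strings.Cut(sc.Text(), "="); ok {
                env[key] = val
            }
        }
        return env, sc.Err()
    }

    func main() {
        env, err := loadSubnetEnv("/run/flannel/subnet.env")
        if err != nil {
            fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
            os.Exit(1)
        }
        // These values feed the bridge netconf shown above (subnet, mtu, ipMasq).
        fmt.Println(env["FLANNEL_SUBNET"], env["FLANNEL_MTU"], env["FLANNEL_IPMASQ"])
    }
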
Feb 13 15:18:29.540019 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:18:29.563839 containerd[1444]: time="2025-02-13T15:18:29.563788430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sv8mh,Uid:0efdccd4-3fbc-4439-aa0e-09fd9e2186df,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9ccf6b981205fa48d70b92bd1e3f59e83fad37b621a020797004bbfb9bffd94\""
Feb 13 15:18:29.564549 kubelet[2555]: E0213 15:18:29.564529 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:29.566990 containerd[1444]: time="2025-02-13T15:18:29.566881508Z" level=info msg="CreateContainer within sandbox \"d9ccf6b981205fa48d70b92bd1e3f59e83fad37b621a020797004bbfb9bffd94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:18:29.579864 containerd[1444]: time="2025-02-13T15:18:29.579820098Z" level=info msg="CreateContainer within sandbox \"d9ccf6b981205fa48d70b92bd1e3f59e83fad37b621a020797004bbfb9bffd94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2dc2cbb1234f237a0961a5360b539de5d20931536b616da2fce5a19b6a649439\""
Feb 13 15:18:29.580311 containerd[1444]: time="2025-02-13T15:18:29.580271407Z" level=info msg="StartContainer for \"2dc2cbb1234f237a0961a5360b539de5d20931536b616da2fce5a19b6a649439\""
Feb 13 15:18:29.608561 systemd[1]: Started cri-containerd-2dc2cbb1234f237a0961a5360b539de5d20931536b616da2fce5a19b6a649439.scope - libcontainer container 2dc2cbb1234f237a0961a5360b539de5d20931536b616da2fce5a19b6a649439.
Feb 13 15:18:29.628529 containerd[1444]: time="2025-02-13T15:18:29.628491060Z" level=info msg="StartContainer for \"2dc2cbb1234f237a0961a5360b539de5d20931536b616da2fce5a19b6a649439\" returns successfully"
Feb 13 15:18:30.460116 kubelet[2555]: E0213 15:18:30.459806 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:30.473832 kubelet[2555]: I0213 15:18:30.472555 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-htwqc" podStartSLOduration=14.261862029 podStartE2EDuration="18.472513402s" podCreationTimestamp="2025-02-13 15:18:12 +0000 UTC" firstStartedPulling="2025-02-13 15:18:13.222615769 +0000 UTC m=+15.923522840" lastFinishedPulling="2025-02-13 15:18:17.433267142 +0000 UTC m=+20.134174213" observedRunningTime="2025-02-13 15:18:19.455380907 +0000 UTC m=+22.156288018" watchObservedRunningTime="2025-02-13 15:18:30.472513402 +0000 UTC m=+33.173420513"
Feb 13 15:18:30.476943 kubelet[2555]: I0213 15:18:30.474609 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-sv8mh" podStartSLOduration=18.474562118 podStartE2EDuration="18.474562118s" podCreationTimestamp="2025-02-13 15:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:30.472347433 +0000 UTC m=+33.173254584" watchObservedRunningTime="2025-02-13 15:18:30.474562118 +0000 UTC m=+33.175469229"
Feb 13 15:18:30.999488 systemd-networkd[1390]: veth56a41973: Gained IPv6LL
Feb 13 15:18:31.255500 systemd-networkd[1390]: cni0: Gained IPv6LL
Feb 13 15:18:31.461170 kubelet[2555]: E0213 15:18:31.461112 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:32.395113 kubelet[2555]: E0213 15:18:32.395049 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:32.395875 containerd[1444]: time="2025-02-13T15:18:32.395539438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pldvs,Uid:5f213dde-b8cc-4b82-9be6-3c6bba835cb9,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:32.456936 systemd-networkd[1390]: vethe45c7817: Link UP
Feb 13 15:18:32.458642 kernel: cni0: port 2(vethe45c7817) entered blocking state
Feb 13 15:18:32.458696 kernel: cni0: port 2(vethe45c7817) entered disabled state
Feb 13 15:18:32.458720 kernel: vethe45c7817: entered allmulticast mode
Feb 13 15:18:32.462300 kernel: vethe45c7817: entered promiscuous mode
Feb 13 15:18:32.467830 kernel: cni0: port 2(vethe45c7817) entered blocking state
Feb 13 15:18:32.467896 kernel: cni0: port 2(vethe45c7817) entered forwarding state
Feb 13 15:18:32.467849 systemd-networkd[1390]: vethe45c7817: Gained carrier
Feb 13 15:18:32.470238 kubelet[2555]: E0213 15:18:32.470205 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:32.474846 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Feb 13 15:18:32.474846 containerd[1444]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:18:32.494471 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:32.493857458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:32.494471 containerd[1444]: time="2025-02-13T15:18:32.494433089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:32.494471 containerd[1444]: time="2025-02-13T15:18:32.494447369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:32.494699 containerd[1444]: time="2025-02-13T15:18:32.494543535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:32.522592 systemd[1]: Started cri-containerd-9351e4f97783aa508d67a824c042a6267f374d0b93261547fba5d7a74d87753e.scope - libcontainer container 9351e4f97783aa508d67a824c042a6267f374d0b93261547fba5d7a74d87753e.
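
Note: the startup numbers for kube-flannel-ds-htwqc above are internally consistent if podStartSLOduration is the end-to-end duration minus the image-pull window: E2E runs from podCreationTimestamp (15:18:12) to watchObservedRunningTime (15:18:30.472513402), i.e. 18.472513402s, and 18.472513402 - (17.433267142 - 13.222615769) = 14.261862029, matching the logged value. A quick Go check of that arithmetic, with the timestamps copied from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-02-13 15:18:12 +0000 UTC")             // podCreationTimestamp
        pullStart := parse("2025-02-13 15:18:13.222615769 +0000 UTC") // firstStartedPulling
        pullEnd := parse("2025-02-13 15:18:17.433267142 +0000 UTC")   // lastFinishedPulling
        running := parse("2025-02-13 15:18:30.472513402 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart) // E2E minus the image-pull window
        fmt.Println(e2e, slo)               // 18.472513402s 14.261862029s
    }
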
Feb 13 15:18:32.533355 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:18:32.555781 containerd[1444]: time="2025-02-13T15:18:32.555727728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pldvs,Uid:5f213dde-b8cc-4b82-9be6-3c6bba835cb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9351e4f97783aa508d67a824c042a6267f374d0b93261547fba5d7a74d87753e\""
Feb 13 15:18:32.556453 kubelet[2555]: E0213 15:18:32.556433 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:32.558775 containerd[1444]: time="2025-02-13T15:18:32.558744409Z" level=info msg="CreateContainer within sandbox \"9351e4f97783aa508d67a824c042a6267f374d0b93261547fba5d7a74d87753e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:18:32.570921 containerd[1444]: time="2025-02-13T15:18:32.570882698Z" level=info msg="CreateContainer within sandbox \"9351e4f97783aa508d67a824c042a6267f374d0b93261547fba5d7a74d87753e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4494bc8686fd06134d24d285a039612b274e539610e4dd0aa9ea002b0d4bfed\""
Feb 13 15:18:32.571366 containerd[1444]: time="2025-02-13T15:18:32.571340363Z" level=info msg="StartContainer for \"d4494bc8686fd06134d24d285a039612b274e539610e4dd0aa9ea002b0d4bfed\""
Feb 13 15:18:32.598580 systemd[1]: Started cri-containerd-d4494bc8686fd06134d24d285a039612b274e539610e4dd0aa9ea002b0d4bfed.scope - libcontainer container d4494bc8686fd06134d24d285a039612b274e539610e4dd0aa9ea002b0d4bfed.
Feb 13 15:18:32.622573 containerd[1444]: time="2025-02-13T15:18:32.622533782Z" level=info msg="StartContainer for \"d4494bc8686fd06134d24d285a039612b274e539610e4dd0aa9ea002b0d4bfed\" returns successfully"
Feb 13 15:18:33.184702 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:42500.service - OpenSSH per-connection server daemon (10.0.0.1:42500).
Feb 13 15:18:33.234208 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 42500 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:33.235720 sshd-session[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:33.240001 systemd-logind[1423]: New session 8 of user core.
Feb 13 15:18:33.250580 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:18:33.370982 sshd[3514]: Connection closed by 10.0.0.1 port 42500
Feb 13 15:18:33.371469 sshd-session[3512]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:33.386048 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:42500.service: Deactivated successfully.
Feb 13 15:18:33.387780 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:18:33.388670 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:18:33.402782 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:42508.service - OpenSSH per-connection server daemon (10.0.0.1:42508).
Feb 13 15:18:33.403858 systemd-logind[1423]: Removed session 8.
Feb 13 15:18:33.441211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731001388.mount: Deactivated successfully.
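
Note: the recurring dns.go:153 "Nameserver limits exceeded" error throughout this log means the resolv.conf kubelet hands to pods lists more nameservers than the resolver limit of three; kubelet keeps the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. The omitted entries never appear in the log, so the fourth line below is a placeholder (a documentation address), but a resolv.conf of this shape would reproduce the message:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53
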
Feb 13 15:18:33.447753 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 42508 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:33.449064 sshd-session[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:33.454460 systemd-logind[1423]: New session 9 of user core.
Feb 13 15:18:33.461539 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:18:33.473407 kubelet[2555]: E0213 15:18:33.473346 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:33.508847 kubelet[2555]: I0213 15:18:33.508792 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pldvs" podStartSLOduration=21.508748083 podStartE2EDuration="21.508748083s" podCreationTimestamp="2025-02-13 15:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:33.49100312 +0000 UTC m=+36.191910231" watchObservedRunningTime="2025-02-13 15:18:33.508748083 +0000 UTC m=+36.209655154"
Feb 13 15:18:33.621306 sshd[3530]: Connection closed by 10.0.0.1 port 42508
Feb 13 15:18:33.621742 sshd-session[3528]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:33.637515 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:42508.service: Deactivated successfully.
Feb 13 15:18:33.639234 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:18:33.642679 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:18:33.651698 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:42516.service - OpenSSH per-connection server daemon (10.0.0.1:42516).
Feb 13 15:18:33.655590 systemd-logind[1423]: Removed session 9.
Feb 13 15:18:33.694488 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 42516 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:33.695655 sshd-session[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:33.700879 systemd-logind[1423]: New session 10 of user core.
Feb 13 15:18:33.715632 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:18:33.844134 sshd[3547]: Connection closed by 10.0.0.1 port 42516
Feb 13 15:18:33.844661 sshd-session[3545]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:33.847433 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:42516.service: Deactivated successfully.
Feb 13 15:18:33.849162 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:18:33.853031 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:18:33.853930 systemd-logind[1423]: Removed session 10.
Feb 13 15:18:33.943515 systemd-networkd[1390]: vethe45c7817: Gained IPv6LL
Feb 13 15:18:34.475666 kubelet[2555]: E0213 15:18:34.475622 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:35.477204 kubelet[2555]: E0213 15:18:35.477160 2555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:38.863152 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:42524.service - OpenSSH per-connection server daemon (10.0.0.1:42524).
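
Note: in the coredns-76f75df574-pldvs startup record above, firstStartedPulling and lastFinishedPulling are "0001-01-01 00:00:00 +0000 UTC", which is Go's zero time.Time: no image pull was recorded for this pod (the image was already on the node), so podStartSLOduration equals podStartE2EDuration exactly (21.508748083s, from creation at 15:18:12 to watchObservedRunningTime at 15:18:33.508748083). The zero value's rendering is easy to confirm:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var t time.Time         // zero value: never set
        fmt.Println(t)          // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(t.IsZero()) // true
    }
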
Feb 13 15:18:38.921829 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 42524 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:38.922968 sshd-session[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:38.931435 systemd-logind[1423]: New session 11 of user core.
Feb 13 15:18:38.943702 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:18:39.089090 sshd[3586]: Connection closed by 10.0.0.1 port 42524
Feb 13 15:18:39.089729 sshd-session[3584]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:39.099189 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:42524.service: Deactivated successfully.
Feb 13 15:18:39.101233 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:18:39.103063 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:18:39.104742 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:42526.service - OpenSSH per-connection server daemon (10.0.0.1:42526).
Feb 13 15:18:39.109445 systemd-logind[1423]: Removed session 11.
Feb 13 15:18:39.169249 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 42526 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:39.170889 sshd-session[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:39.175818 systemd-logind[1423]: New session 12 of user core.
Feb 13 15:18:39.181554 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:18:39.397532 sshd[3600]: Connection closed by 10.0.0.1 port 42526
Feb 13 15:18:39.398007 sshd-session[3598]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:39.413246 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:42526.service: Deactivated successfully.
Feb 13 15:18:39.416178 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:18:39.418182 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:18:39.425726 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:42540.service - OpenSSH per-connection server daemon (10.0.0.1:42540).
Feb 13 15:18:39.427783 systemd-logind[1423]: Removed session 12.
Feb 13 15:18:39.472313 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 42540 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:39.473756 sshd-session[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:39.479306 systemd-logind[1423]: New session 13 of user core.
Feb 13 15:18:39.484565 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:18:40.673744 sshd[3613]: Connection closed by 10.0.0.1 port 42540
Feb 13 15:18:40.673558 sshd-session[3611]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:40.687718 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:42540.service: Deactivated successfully.
Feb 13 15:18:40.691296 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:18:40.694828 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:18:40.706718 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556).
Feb 13 15:18:40.707880 systemd-logind[1423]: Removed session 13.
Feb 13 15:18:40.749461 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:40.750291 sshd-session[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:40.754807 systemd-logind[1423]: New session 14 of user core.
Feb 13 15:18:40.767607 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:18:40.989295 sshd[3659]: Connection closed by 10.0.0.1 port 42556
Feb 13 15:18:40.990715 sshd-session[3657]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:41.000112 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:42556.service: Deactivated successfully.
Feb 13 15:18:41.002306 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:18:41.004228 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:18:41.006656 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:42566.service - OpenSSH per-connection server daemon (10.0.0.1:42566).
Feb 13 15:18:41.008975 systemd-logind[1423]: Removed session 14.
Feb 13 15:18:41.056667 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 42566 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:41.058401 sshd-session[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:41.063127 systemd-logind[1423]: New session 15 of user core.
Feb 13 15:18:41.073586 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:18:41.185094 sshd[3671]: Connection closed by 10.0.0.1 port 42566
Feb 13 15:18:41.187516 sshd-session[3669]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:41.190086 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:42566.service: Deactivated successfully.
Feb 13 15:18:41.191792 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:18:41.193184 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:18:41.194251 systemd-logind[1423]: Removed session 15.
Feb 13 15:18:46.198319 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:42750.service - OpenSSH per-connection server daemon (10.0.0.1:42750).
Feb 13 15:18:46.248552 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 42750 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:46.250031 sshd-session[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:46.253577 systemd-logind[1423]: New session 16 of user core.
Feb 13 15:18:46.269577 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:18:46.380340 sshd[3711]: Connection closed by 10.0.0.1 port 42750
Feb 13 15:18:46.380708 sshd-session[3709]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:46.384792 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:42750.service: Deactivated successfully.
Feb 13 15:18:46.386669 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:18:46.388197 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:18:46.389128 systemd-logind[1423]: Removed session 16.
Feb 13 15:18:51.395009 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:42764.service - OpenSSH per-connection server daemon (10.0.0.1:42764).
Feb 13 15:18:51.449044 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 42764 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:51.449572 sshd-session[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:51.454983 systemd-logind[1423]: New session 17 of user core.
Feb 13 15:18:51.467571 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:18:51.590930 sshd[3747]: Connection closed by 10.0.0.1 port 42764
Feb 13 15:18:51.590096 sshd-session[3745]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:51.592852 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:42764.service: Deactivated successfully.
Feb 13 15:18:51.594301 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:18:51.596001 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:18:51.597209 systemd-logind[1423]: Removed session 17.
Feb 13 15:18:56.601448 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:50116.service - OpenSSH per-connection server daemon (10.0.0.1:50116).
Feb 13 15:18:56.649440 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 50116 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:18:56.650044 sshd-session[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:56.655633 systemd-logind[1423]: New session 18 of user core.
Feb 13 15:18:56.664615 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:18:56.791699 sshd[3783]: Connection closed by 10.0.0.1 port 50116
Feb 13 15:18:56.791977 sshd-session[3781]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:56.795539 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:50116.service: Deactivated successfully.
Feb 13 15:18:56.797767 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:18:56.798760 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:18:56.799769 systemd-logind[1423]: Removed session 18.