Feb 14 09:03:00.973863 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 14 09:03:00.973884 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 14 09:03:00.973894 kernel: KASLR enabled
Feb 14 09:03:00.973899 kernel: efi: EFI v2.7 by EDK II
Feb 14 09:03:00.973905 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 14 09:03:00.973911 kernel: random: crng init done
Feb 14 09:03:00.973918 kernel: ACPI: Early table checksum verification disabled
Feb 14 09:03:00.973924 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 14 09:03:00.973930 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 14 09:03:00.973937 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973943 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973949 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973955 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973961 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973968 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973976 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973983 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973989 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 14 09:03:00.973995 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 14 09:03:00.974002 kernel: NUMA: Failed to initialise from firmware
Feb 14 09:03:00.974008 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 14 09:03:00.974014 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 14 09:03:00.974021 kernel: Zone ranges:
Feb 14 09:03:00.974027 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 14 09:03:00.974033 kernel: DMA32 empty
Feb 14 09:03:00.974041 kernel: Normal empty
Feb 14 09:03:00.974047 kernel: Movable zone start for each node
Feb 14 09:03:00.974053 kernel: Early memory node ranges
Feb 14 09:03:00.974060 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 14 09:03:00.974066 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 14 09:03:00.974073 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 14 09:03:00.974079 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 14 09:03:00.974085 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 14 09:03:00.974092 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 14 09:03:00.974098 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 14 09:03:00.974104 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 14 09:03:00.974111 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 14 09:03:00.974119 kernel: psci: probing for conduit method from ACPI.
Feb 14 09:03:00.974125 kernel: psci: PSCIv1.1 detected in firmware.
Feb 14 09:03:00.974132 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 14 09:03:00.974141 kernel: psci: Trusted OS migration not required
Feb 14 09:03:00.974147 kernel: psci: SMC Calling Convention v1.1
Feb 14 09:03:00.974154 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 14 09:03:00.974162 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 14 09:03:00.974169 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 14 09:03:00.974176 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 14 09:03:00.974183 kernel: Detected PIPT I-cache on CPU0
Feb 14 09:03:00.974190 kernel: CPU features: detected: GIC system register CPU interface
Feb 14 09:03:00.974196 kernel: CPU features: detected: Hardware dirty bit management
Feb 14 09:03:00.974203 kernel: CPU features: detected: Spectre-v4
Feb 14 09:03:00.974210 kernel: CPU features: detected: Spectre-BHB
Feb 14 09:03:00.974217 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 14 09:03:00.974224 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 14 09:03:00.974231 kernel: CPU features: detected: ARM erratum 1418040
Feb 14 09:03:00.974238 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 14 09:03:00.974245 kernel: alternatives: applying boot alternatives
Feb 14 09:03:00.974252 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 14 09:03:00.974260 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 14 09:03:00.974266 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 14 09:03:00.974273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 14 09:03:00.974280 kernel: Fallback order for Node 0: 0
Feb 14 09:03:00.974287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 14 09:03:00.974293 kernel: Policy zone: DMA
Feb 14 09:03:00.974300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 14 09:03:00.974308 kernel: software IO TLB: area num 4.
Feb 14 09:03:00.974315 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 14 09:03:00.974322 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 14 09:03:00.974329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 14 09:03:00.974335 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 14 09:03:00.974343 kernel: rcu: RCU event tracing is enabled.
Feb 14 09:03:00.974350 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 14 09:03:00.974357 kernel: Trampoline variant of Tasks RCU enabled.
Feb 14 09:03:00.974363 kernel: Tracing variant of Tasks RCU enabled.
Feb 14 09:03:00.974371 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 14 09:03:00.974377 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 14 09:03:00.974384 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 14 09:03:00.974392 kernel: GICv3: 256 SPIs implemented
Feb 14 09:03:00.974399 kernel: GICv3: 0 Extended SPIs implemented
Feb 14 09:03:00.974406 kernel: Root IRQ handler: gic_handle_irq
Feb 14 09:03:00.974412 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 14 09:03:00.974422 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 14 09:03:00.974429 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 14 09:03:00.974436 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 14 09:03:00.974443 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 14 09:03:00.974450 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 14 09:03:00.974457 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 14 09:03:00.974463 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 14 09:03:00.974471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 09:03:00.974478 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 14 09:03:00.974485 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 14 09:03:00.974493 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 14 09:03:00.974499 kernel: arm-pv: using stolen time PV
Feb 14 09:03:00.974506 kernel: Console: colour dummy device 80x25
Feb 14 09:03:00.974513 kernel: ACPI: Core revision 20230628
Feb 14 09:03:00.974521 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 14 09:03:00.974528 kernel: pid_max: default: 32768 minimum: 301
Feb 14 09:03:00.974535 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 14 09:03:00.974543 kernel: landlock: Up and running.
Feb 14 09:03:00.974550 kernel: SELinux: Initializing.
Feb 14 09:03:00.974557 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 14 09:03:00.974564 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 14 09:03:00.974571 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 14 09:03:00.974578 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 14 09:03:00.974657 kernel: rcu: Hierarchical SRCU implementation.
Feb 14 09:03:00.974667 kernel: rcu: Max phase no-delay instances is 400.
Feb 14 09:03:00.974675 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 14 09:03:00.974684 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 14 09:03:00.974691 kernel: Remapping and enabling EFI services.
Feb 14 09:03:00.974698 kernel: smp: Bringing up secondary CPUs ...
Feb 14 09:03:00.974705 kernel: Detected PIPT I-cache on CPU1
Feb 14 09:03:00.974712 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 14 09:03:00.974719 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 14 09:03:00.974726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 09:03:00.974733 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 14 09:03:00.974740 kernel: Detected PIPT I-cache on CPU2
Feb 14 09:03:00.974747 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 14 09:03:00.974756 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 14 09:03:00.974763 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 09:03:00.974774 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 14 09:03:00.974783 kernel: Detected PIPT I-cache on CPU3
Feb 14 09:03:00.974790 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 14 09:03:00.974798 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 14 09:03:00.974805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 14 09:03:00.974812 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 14 09:03:00.974819 kernel: smp: Brought up 1 node, 4 CPUs
Feb 14 09:03:00.974828 kernel: SMP: Total of 4 processors activated.
Feb 14 09:03:00.974835 kernel: CPU features: detected: 32-bit EL0 Support
Feb 14 09:03:00.974843 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 14 09:03:00.974850 kernel: CPU features: detected: Common not Private translations
Feb 14 09:03:00.974857 kernel: CPU features: detected: CRC32 instructions
Feb 14 09:03:00.974864 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 14 09:03:00.974871 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 14 09:03:00.974879 kernel: CPU features: detected: LSE atomic instructions
Feb 14 09:03:00.974887 kernel: CPU features: detected: Privileged Access Never
Feb 14 09:03:00.974894 kernel: CPU features: detected: RAS Extension Support
Feb 14 09:03:00.974902 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 14 09:03:00.974909 kernel: CPU: All CPU(s) started at EL1
Feb 14 09:03:00.974917 kernel: alternatives: applying system-wide alternatives
Feb 14 09:03:00.974924 kernel: devtmpfs: initialized
Feb 14 09:03:00.974931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 14 09:03:00.974939 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 14 09:03:00.974947 kernel: pinctrl core: initialized pinctrl subsystem
Feb 14 09:03:00.974955 kernel: SMBIOS 3.0.0 present.
Feb 14 09:03:00.974963 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 14 09:03:00.974971 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 14 09:03:00.974978 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 14 09:03:00.974985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 14 09:03:00.974993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 14 09:03:00.975000 kernel: audit: initializing netlink subsys (disabled)
Feb 14 09:03:00.975008 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 14 09:03:00.975015 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 14 09:03:00.975024 kernel: cpuidle: using governor menu
Feb 14 09:03:00.975031 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 14 09:03:00.975039 kernel: ASID allocator initialised with 32768 entries
Feb 14 09:03:00.975046 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 14 09:03:00.975054 kernel: Serial: AMBA PL011 UART driver
Feb 14 09:03:00.975061 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 14 09:03:00.975068 kernel: Modules: 0 pages in range for non-PLT usage
Feb 14 09:03:00.975076 kernel: Modules: 509040 pages in range for PLT usage
Feb 14 09:03:00.975083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 14 09:03:00.975091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 14 09:03:00.975099 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 14 09:03:00.975107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 14 09:03:00.975114 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 14 09:03:00.975121 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 14 09:03:00.975128 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 14 09:03:00.975136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 14 09:03:00.975143 kernel: ACPI: Added _OSI(Module Device)
Feb 14 09:03:00.975150 kernel: ACPI: Added _OSI(Processor Device)
Feb 14 09:03:00.975159 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 14 09:03:00.975166 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 14 09:03:00.975173 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 14 09:03:00.975181 kernel: ACPI: Interpreter enabled
Feb 14 09:03:00.975188 kernel: ACPI: Using GIC for interrupt routing
Feb 14 09:03:00.975195 kernel: ACPI: MCFG table detected, 1 entries
Feb 14 09:03:00.975203 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 14 09:03:00.975210 kernel: printk: console [ttyAMA0] enabled
Feb 14 09:03:00.975217 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 14 09:03:00.975374 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 14 09:03:00.975451 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 14 09:03:00.975518 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 14 09:03:00.975591 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 14 09:03:00.975695 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 14 09:03:00.975705 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 14 09:03:00.975713 kernel: PCI host bridge to bus 0000:00
Feb 14 09:03:00.975792 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 14 09:03:00.975858 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 14 09:03:00.975917 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 14 09:03:00.975979 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 14 09:03:00.976071 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 14 09:03:00.976149 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 14 09:03:00.976224 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 14 09:03:00.976292 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 14 09:03:00.976359 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 14 09:03:00.976427 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 14 09:03:00.976493 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 14 09:03:00.976561 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 14 09:03:00.976641 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 14 09:03:00.976707 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 14 09:03:00.976768 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 14 09:03:00.976778 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 14 09:03:00.976786 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 14 09:03:00.976793 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 14 09:03:00.976801 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 14 09:03:00.976808 kernel: iommu: Default domain type: Translated
Feb 14 09:03:00.976816 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 14 09:03:00.976823 kernel: efivars: Registered efivars operations
Feb 14 09:03:00.976832 kernel: vgaarb: loaded
Feb 14 09:03:00.976839 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 14 09:03:00.976847 kernel: VFS: Disk quotas dquot_6.6.0
Feb 14 09:03:00.976854 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 14 09:03:00.976861 kernel: pnp: PnP ACPI init
Feb 14 09:03:00.976940 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 14 09:03:00.976951 kernel: pnp: PnP ACPI: found 1 devices
Feb 14 09:03:00.976959 kernel: NET: Registered PF_INET protocol family
Feb 14 09:03:00.976968 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 14 09:03:00.976976 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 14 09:03:00.976984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 14 09:03:00.976991 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 14 09:03:00.976999 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 14 09:03:00.977006 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 14 09:03:00.977014 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 14 09:03:00.977021 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 14 09:03:00.977028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 14 09:03:00.977037 kernel: PCI: CLS 0 bytes, default 64
Feb 14 09:03:00.977044 kernel: kvm [1]: HYP mode not available
Feb 14 09:03:00.977052 kernel: Initialise system trusted keyrings
Feb 14 09:03:00.977059 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 14 09:03:00.977066 kernel: Key type asymmetric registered
Feb 14 09:03:00.977074 kernel: Asymmetric key parser 'x509' registered
Feb 14 09:03:00.977081 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 14 09:03:00.977088 kernel: io scheduler mq-deadline registered
Feb 14 09:03:00.977095 kernel: io scheduler kyber registered
Feb 14 09:03:00.977104 kernel: io scheduler bfq registered
Feb 14 09:03:00.977112 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 14 09:03:00.977119 kernel: ACPI: button: Power Button [PWRB]
Feb 14 09:03:00.977127 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 14 09:03:00.977198 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 14 09:03:00.977208 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 14 09:03:00.977216 kernel: thunder_xcv, ver 1.0
Feb 14 09:03:00.977223 kernel: thunder_bgx, ver 1.0
Feb 14 09:03:00.977231 kernel: nicpf, ver 1.0
Feb 14 09:03:00.977253 kernel: nicvf, ver 1.0
Feb 14 09:03:00.977333 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 14 09:03:00.977402 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-14T09:03:00 UTC (1739523780)
Feb 14 09:03:00.977412 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 14 09:03:00.977420 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 14 09:03:00.977430 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 14 09:03:00.977446 kernel: watchdog: Hard watchdog permanently disabled
Feb 14 09:03:00.977453 kernel: NET: Registered PF_INET6 protocol family
Feb 14 09:03:00.977463 kernel: Segment Routing with IPv6
Feb 14 09:03:00.977470 kernel: In-situ OAM (IOAM) with IPv6
Feb 14 09:03:00.977478 kernel: NET: Registered PF_PACKET protocol family
Feb 14 09:03:00.977485 kernel: Key type dns_resolver registered
Feb 14 09:03:00.977492 kernel: registered taskstats version 1
Feb 14 09:03:00.977500 kernel: Loading compiled-in X.509 certificates
Feb 14 09:03:00.977508 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 14 09:03:00.977515 kernel: Key type .fscrypt registered
Feb 14 09:03:00.977523 kernel: Key type fscrypt-provisioning registered
Feb 14 09:03:00.977532 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 14 09:03:00.977540 kernel: ima: Allocated hash algorithm: sha1
Feb 14 09:03:00.977547 kernel: ima: No architecture policies found
Feb 14 09:03:00.977555 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 14 09:03:00.977562 kernel: clk: Disabling unused clocks
Feb 14 09:03:00.977569 kernel: Freeing unused kernel memory: 39360K
Feb 14 09:03:00.977577 kernel: Run /init as init process
Feb 14 09:03:00.977606 kernel: with arguments:
Feb 14 09:03:00.977617 kernel: /init
Feb 14 09:03:00.977628 kernel: with environment:
Feb 14 09:03:00.977635 kernel: HOME=/
Feb 14 09:03:00.977643 kernel: TERM=linux
Feb 14 09:03:00.977650 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 14 09:03:00.977659 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 14 09:03:00.977668 systemd[1]: Detected virtualization kvm.
Feb 14 09:03:00.977677 systemd[1]: Detected architecture arm64.
Feb 14 09:03:00.977684 systemd[1]: Running in initrd.
Feb 14 09:03:00.977694 systemd[1]: No hostname configured, using default hostname.
Feb 14 09:03:00.977702 systemd[1]: Hostname set to .
Feb 14 09:03:00.977710 systemd[1]: Initializing machine ID from VM UUID.
Feb 14 09:03:00.977718 systemd[1]: Queued start job for default target initrd.target.
Feb 14 09:03:00.977726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 09:03:00.977734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 09:03:00.977742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 14 09:03:00.977751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 14 09:03:00.977760 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 14 09:03:00.977769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 14 09:03:00.977779 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 14 09:03:00.977787 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 14 09:03:00.977803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 09:03:00.977817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 14 09:03:00.977828 systemd[1]: Reached target paths.target - Path Units.
Feb 14 09:03:00.977836 systemd[1]: Reached target slices.target - Slice Units.
Feb 14 09:03:00.977844 systemd[1]: Reached target swap.target - Swaps.
Feb 14 09:03:00.977852 systemd[1]: Reached target timers.target - Timer Units.
Feb 14 09:03:00.977860 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 09:03:00.977868 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 09:03:00.977877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 14 09:03:00.977885 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 14 09:03:00.977893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 09:03:00.977903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 14 09:03:00.977911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 14 09:03:00.977920 systemd[1]: Reached target sockets.target - Socket Units.
Feb 14 09:03:00.977928 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 14 09:03:00.977936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 14 09:03:00.977944 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 14 09:03:00.977952 systemd[1]: Starting systemd-fsck-usr.service...
Feb 14 09:03:00.977960 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 14 09:03:00.977969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 14 09:03:00.977978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 09:03:00.977986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 14 09:03:00.977994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 14 09:03:00.978003 systemd[1]: Finished systemd-fsck-usr.service.
Feb 14 09:03:00.978011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 14 09:03:00.978041 systemd-journald[237]: Collecting audit messages is disabled.
Feb 14 09:03:00.978066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 09:03:00.978075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 09:03:00.978086 systemd-journald[237]: Journal started
Feb 14 09:03:00.978105 systemd-journald[237]: Runtime Journal (/run/log/journal/7e91918f010a47568b7d3abc44fce8c1) is 5.9M, max 47.3M, 41.4M free.
Feb 14 09:03:00.961851 systemd-modules-load[238]: Inserted module 'overlay'
Feb 14 09:03:00.979659 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 14 09:03:00.980745 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 14 09:03:00.984028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 14 09:03:00.986337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 14 09:03:00.989281 kernel: Bridge firewalling registered
Feb 14 09:03:00.986565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 14 09:03:00.986923 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 14 09:03:00.987941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 14 09:03:00.991763 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 14 09:03:00.996250 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 14 09:03:01.000250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 09:03:01.003925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 14 09:03:01.005347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 09:03:01.012742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 14 09:03:01.014675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 14 09:03:01.024335 dracut-cmdline[277]: dracut-dracut-053
Feb 14 09:03:01.026858 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 14 09:03:01.045443 systemd-resolved[278]: Positive Trust Anchors:
Feb 14 09:03:01.045464 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 14 09:03:01.045496 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 14 09:03:01.050356 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 14 09:03:01.051475 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 14 09:03:01.053185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 14 09:03:01.098618 kernel: SCSI subsystem initialized
Feb 14 09:03:01.103610 kernel: Loading iSCSI transport class v2.0-870.
Feb 14 09:03:01.113610 kernel: iscsi: registered transport (tcp)
Feb 14 09:03:01.123871 kernel: iscsi: registered transport (qla4xxx)
Feb 14 09:03:01.123905 kernel: QLogic iSCSI HBA Driver
Feb 14 09:03:01.167797 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 14 09:03:01.182739 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 14 09:03:01.199068 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 14 09:03:01.199117 kernel: device-mapper: uevent: version 1.0.3
Feb 14 09:03:01.200354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 14 09:03:01.245622 kernel: raid6: neonx8 gen() 15779 MB/s
Feb 14 09:03:01.262609 kernel: raid6: neonx4 gen() 15648 MB/s
Feb 14 09:03:01.279612 kernel: raid6: neonx2 gen() 13187 MB/s
Feb 14 09:03:01.296618 kernel: raid6: neonx1 gen() 10497 MB/s
Feb 14 09:03:01.313618 kernel: raid6: int64x8 gen() 6965 MB/s
Feb 14 09:03:01.330618 kernel: raid6: int64x4 gen() 7346 MB/s
Feb 14 09:03:01.347606 kernel: raid6: int64x2 gen() 6133 MB/s
Feb 14 09:03:01.364612 kernel: raid6: int64x1 gen() 5055 MB/s
Feb 14 09:03:01.364635 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
Feb 14 09:03:01.381629 kernel: raid6: .... xor() 11920 MB/s, rmw enabled
Feb 14 09:03:01.381657 kernel: raid6: using neon recovery algorithm
Feb 14 09:03:01.386612 kernel: xor: measuring software checksum speed
Feb 14 09:03:01.386628 kernel: 8regs : 19778 MB/sec
Feb 14 09:03:01.388091 kernel: 32regs : 18086 MB/sec
Feb 14 09:03:01.388104 kernel: arm64_neon : 27070 MB/sec
Feb 14 09:03:01.388113 kernel: xor: using function: arm64_neon (27070 MB/sec)
Feb 14 09:03:01.439616 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 14 09:03:01.450216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 14 09:03:01.459760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 09:03:01.471140 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 14 09:03:01.474306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 09:03:01.482170 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 14 09:03:01.496022 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Feb 14 09:03:01.524162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 09:03:01.532786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 14 09:03:01.576799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 09:03:01.584763 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 14 09:03:01.598432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 14 09:03:01.601819 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 09:03:01.603636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 09:03:01.604604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 14 09:03:01.612750 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 14 09:03:01.621418 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 09:03:01.628025 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 14 09:03:01.636697 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 14 09:03:01.636800 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 14 09:03:01.636811 kernel: GPT:9289727 != 19775487
Feb 14 09:03:01.636821 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 14 09:03:01.636830 kernel: GPT:9289727 != 19775487
Feb 14 09:03:01.636839 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 14 09:03:01.636856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 09:03:01.639197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 14 09:03:01.639310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 09:03:01.641891 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 09:03:01.643792 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 14 09:03:01.643933 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 09:03:01.647638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 09:03:01.652827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 14 09:03:01.655936 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Feb 14 09:03:01.658620 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (508)
Feb 14 09:03:01.664451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 14 09:03:01.666680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 14 09:03:01.678261 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 14 09:03:01.683558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 14 09:03:01.687309 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 14 09:03:01.688336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 14 09:03:01.699820 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 14 09:03:01.701661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 14 09:03:01.707840 disk-uuid[554]: Primary Header is updated.
Feb 14 09:03:01.707840 disk-uuid[554]: Secondary Entries is updated.
Feb 14 09:03:01.707840 disk-uuid[554]: Secondary Header is updated.
Feb 14 09:03:01.711615 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 09:03:01.727372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 14 09:03:02.734484 disk-uuid[556]: The operation has completed successfully.
Feb 14 09:03:02.735494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 14 09:03:02.755284 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 14 09:03:02.755651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 14 09:03:02.779877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 14 09:03:02.784822 sh[575]: Success
Feb 14 09:03:02.802631 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 14 09:03:02.839027 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 14 09:03:02.840531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 14 09:03:02.841290 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 14 09:03:02.852272 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 14 09:03:02.852309 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 14 09:03:02.852320 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 14 09:03:02.852331 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 14 09:03:02.852944 kernel: BTRFS info (device dm-0): using free space tree
Feb 14 09:03:02.856811 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 14 09:03:02.857990 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 14 09:03:02.864759 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 14 09:03:02.866170 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 14 09:03:02.874201 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 09:03:02.874255 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 09:03:02.874268 kernel: BTRFS info (device vda6): using free space tree
Feb 14 09:03:02.876176 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 09:03:02.884213 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 14 09:03:02.885637 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 09:03:02.891550 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 14 09:03:02.899779 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 14 09:03:02.968662 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 09:03:02.990834 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 14 09:03:03.017630 systemd-networkd[768]: lo: Link UP
Feb 14 09:03:03.017641 systemd-networkd[768]: lo: Gained carrier
Feb 14 09:03:03.018326 systemd-networkd[768]: Enumeration completed
Feb 14 09:03:03.018529 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 14 09:03:03.018914 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 09:03:03.018917 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 14 09:03:03.019665 systemd-networkd[768]: eth0: Link UP
Feb 14 09:03:03.019668 systemd-networkd[768]: eth0: Gained carrier
Feb 14 09:03:03.024933 ignition[664]: Ignition 2.19.0
Feb 14 09:03:03.019675 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 14 09:03:03.024940 ignition[664]: Stage: fetch-offline
Feb 14 09:03:03.019948 systemd[1]: Reached target network.target - Network.
Feb 14 09:03:03.024980 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:03.024988 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:03.025170 ignition[664]: parsed url from cmdline: ""
Feb 14 09:03:03.025173 ignition[664]: no config URL provided
Feb 14 09:03:03.025177 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Feb 14 09:03:03.025184 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Feb 14 09:03:03.025207 ignition[664]: op(1): [started] loading QEMU firmware config module
Feb 14 09:03:03.025211 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 14 09:03:03.035505 ignition[664]: op(1): [finished] loading QEMU firmware config module
Feb 14 09:03:03.042645 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 14 09:03:03.056812 ignition[664]: parsing config with SHA512: 3b2a5d92c541823ccfb8cbaf4b57aea923a30f570630d80da9d55eaed00ff795eceecaf5c6bd2424c9783b3b3121128ce3681f1bf4f671bcd48e4ee2496cc538
Feb 14 09:03:03.063584 unknown[664]: fetched base config from "system"
Feb 14 09:03:03.063607 unknown[664]: fetched user config from "qemu"
Feb 14 09:03:03.064063 ignition[664]: fetch-offline: fetch-offline passed
Feb 14 09:03:03.064134 ignition[664]: Ignition finished successfully
Feb 14 09:03:03.066659 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 09:03:03.068008 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 14 09:03:03.076754 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 14 09:03:03.089459 ignition[774]: Ignition 2.19.0
Feb 14 09:03:03.089470 ignition[774]: Stage: kargs
Feb 14 09:03:03.089697 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:03.089707 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:03.090702 ignition[774]: kargs: kargs passed
Feb 14 09:03:03.090749 ignition[774]: Ignition finished successfully
Feb 14 09:03:03.094223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 14 09:03:03.104765 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 14 09:03:03.116206 ignition[782]: Ignition 2.19.0
Feb 14 09:03:03.116217 ignition[782]: Stage: disks
Feb 14 09:03:03.116379 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:03.116388 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:03.117247 ignition[782]: disks: disks passed
Feb 14 09:03:03.118793 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 14 09:03:03.117293 ignition[782]: Ignition finished successfully
Feb 14 09:03:03.121644 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 14 09:03:03.122534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 14 09:03:03.124217 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 14 09:03:03.125721 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 14 09:03:03.127069 systemd[1]: Reached target basic.target - Basic System.
Feb 14 09:03:03.139728 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 14 09:03:03.147659 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.7
Feb 14 09:03:03.147674 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Feb 14 09:03:03.149994 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 14 09:03:03.152465 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 14 09:03:03.154811 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 14 09:03:03.198628 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 14 09:03:03.199057 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 14 09:03:03.200172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 14 09:03:03.209692 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 09:03:03.211169 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 14 09:03:03.212293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 14 09:03:03.212331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 14 09:03:03.217610 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Feb 14 09:03:03.212353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 09:03:03.221322 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 09:03:03.221341 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 09:03:03.221351 kernel: BTRFS info (device vda6): using free space tree
Feb 14 09:03:03.221361 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 09:03:03.218961 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 14 09:03:03.222910 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 14 09:03:03.224589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 09:03:03.264395 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 14 09:03:03.268805 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 14 09:03:03.272673 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 14 09:03:03.275914 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 14 09:03:03.350718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 14 09:03:03.363731 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 14 09:03:03.365175 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 14 09:03:03.370622 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 09:03:03.384684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 14 09:03:03.388436 ignition[914]: INFO : Ignition 2.19.0
Feb 14 09:03:03.388436 ignition[914]: INFO : Stage: mount
Feb 14 09:03:03.390668 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:03.390668 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:03.390668 ignition[914]: INFO : mount: mount passed
Feb 14 09:03:03.390668 ignition[914]: INFO : Ignition finished successfully
Feb 14 09:03:03.391153 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 14 09:03:03.399726 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 14 09:03:03.850832 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 14 09:03:03.859763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 14 09:03:03.864611 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Feb 14 09:03:03.867048 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 14 09:03:03.867076 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 14 09:03:03.867087 kernel: BTRFS info (device vda6): using free space tree
Feb 14 09:03:03.869622 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 14 09:03:03.870219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 14 09:03:03.885511 ignition[946]: INFO : Ignition 2.19.0
Feb 14 09:03:03.885511 ignition[946]: INFO : Stage: files
Feb 14 09:03:03.886722 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:03.886722 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:03.886722 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 14 09:03:03.889275 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 14 09:03:03.889275 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 14 09:03:03.891612 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 14 09:03:03.892618 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 14 09:03:03.892618 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 14 09:03:03.892111 unknown[946]: wrote ssh authorized keys file for user: core
Feb 14 09:03:03.895333 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 09:03:03.895333 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 14 09:03:04.124267 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 14 09:03:04.709764 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 14 09:03:05.559979 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 14 09:03:05.559979 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 14 09:03:05.563252 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 14 09:03:05.894187 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 14 09:03:06.363747 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 14 09:03:06.363747 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 14 09:03:06.366641 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 14 09:03:06.390412 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 14 09:03:06.394218 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 14 09:03:06.396605 ignition[946]: INFO : files: files passed
Feb 14 09:03:06.396605 ignition[946]: INFO : Ignition finished successfully
Feb 14 09:03:06.396958 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 14 09:03:06.404766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 14 09:03:06.406982 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 14 09:03:06.408679 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 14 09:03:06.408765 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 14 09:03:06.414364 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 14 09:03:06.417684 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 09:03:06.417684 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 09:03:06.420521 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 14 09:03:06.421732 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 09:03:06.423740 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 14 09:03:06.434772 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 14 09:03:06.454940 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 14 09:03:06.455053 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 14 09:03:06.456824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 14 09:03:06.458359 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 14 09:03:06.459880 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 14 09:03:06.460723 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 14 09:03:06.476697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 09:03:06.479801 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 14 09:03:06.490989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 14 09:03:06.491965 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 14 09:03:06.493520 systemd[1]: Stopped target timers.target - Timer Units.
Feb 14 09:03:06.494937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 14 09:03:06.495058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 14 09:03:06.497115 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 14 09:03:06.498691 systemd[1]: Stopped target basic.target - Basic System.
Feb 14 09:03:06.500060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 14 09:03:06.501474 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 14 09:03:06.503116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 14 09:03:06.504692 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 14 09:03:06.506185 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 14 09:03:06.507750 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 14 09:03:06.509318 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 14 09:03:06.510712 systemd[1]: Stopped target swap.target - Swaps.
Feb 14 09:03:06.511927 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 14 09:03:06.512045 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 14 09:03:06.513942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 14 09:03:06.515496 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 14 09:03:06.517060 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 14 09:03:06.518652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 14 09:03:06.519641 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 14 09:03:06.519751 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 14 09:03:06.522158 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 14 09:03:06.522275 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 14 09:03:06.523825 systemd[1]: Stopped target paths.target - Path Units.
Feb 14 09:03:06.525060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 14 09:03:06.528644 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 14 09:03:06.529685 systemd[1]: Stopped target slices.target - Slice Units.
Feb 14 09:03:06.531412 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 14 09:03:06.532687 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 14 09:03:06.532769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 14 09:03:06.534046 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 14 09:03:06.534125 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 14 09:03:06.535381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 14 09:03:06.535482 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 14 09:03:06.536882 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 14 09:03:06.536976 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 14 09:03:06.548845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 14 09:03:06.550236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 14 09:03:06.550926 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 14 09:03:06.551033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 14 09:03:06.552500 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 14 09:03:06.552697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 14 09:03:06.557126 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 14 09:03:06.558092 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 14 09:03:06.561925 ignition[1001]: INFO : Ignition 2.19.0
Feb 14 09:03:06.562762 ignition[1001]: INFO : Stage: umount
Feb 14 09:03:06.562762 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 14 09:03:06.562762 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 14 09:03:06.565625 ignition[1001]: INFO : umount: umount passed
Feb 14 09:03:06.565625 ignition[1001]: INFO : Ignition finished successfully
Feb 14 09:03:06.564923 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 14 09:03:06.565358 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 14 09:03:06.565440 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 14 09:03:06.566808 systemd[1]: Stopped target network.target - Network.
Feb 14 09:03:06.567833 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 14 09:03:06.567903 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 14 09:03:06.572663 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 14 09:03:06.572711 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 14 09:03:06.574028 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 14 09:03:06.574067 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 14 09:03:06.575487 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 14 09:03:06.575528 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 14 09:03:06.578143 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 14 09:03:06.579460 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 14 09:03:06.587682 systemd-networkd[768]: eth0: DHCPv6 lease lost
Feb 14 09:03:06.590225 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 14 09:03:06.590333 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 14 09:03:06.592204 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 14 09:03:06.592322 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 14 09:03:06.594386 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 14 09:03:06.594434 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 14 09:03:06.599757 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 14 09:03:06.600448 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 14 09:03:06.600502 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 14 09:03:06.602195 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 14 09:03:06.602236 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 14 09:03:06.603625 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 14 09:03:06.603670 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 14 09:03:06.605431 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 14 09:03:06.605465 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 14 09:03:06.607064 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 14 09:03:06.617123 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 14 09:03:06.617233 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 14 09:03:06.623236 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 14 09:03:06.623382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 14 09:03:06.626023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 14 09:03:06.626077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 14 09:03:06.627005 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 09:03:06.627034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 09:03:06.628688 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 09:03:06.628739 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 09:03:06.630932 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 09:03:06.630991 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 09:03:06.634437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 09:03:06.634510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 09:03:06.643730 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 09:03:06.644550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 14 09:03:06.644625 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 09:03:06.646406 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 14 09:03:06.646444 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 09:03:06.648041 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 09:03:06.648075 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 09:03:06.650106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 09:03:06.650150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 09:03:06.652184 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 14 09:03:06.652273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 14 09:03:06.654641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 09:03:06.654735 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 09:03:06.657160 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 09:03:06.658389 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 09:03:06.658459 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 09:03:06.660844 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 09:03:06.670958 systemd[1]: Switching root. Feb 14 09:03:06.703644 systemd-journald[237]: Journal stopped Feb 14 09:03:07.470844 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 14 09:03:07.470899 kernel: SELinux: policy capability network_peer_controls=1 Feb 14 09:03:07.470915 kernel: SELinux: policy capability open_perms=1 Feb 14 09:03:07.470925 kernel: SELinux: policy capability extended_socket_class=1 Feb 14 09:03:07.470935 kernel: SELinux: policy capability always_check_network=0 Feb 14 09:03:07.470944 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 14 09:03:07.470954 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 14 09:03:07.470964 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 14 09:03:07.470974 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 14 09:03:07.470983 kernel: audit: type=1403 audit(1739523786.897:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 14 09:03:07.470997 systemd[1]: Successfully loaded SELinux policy in 31.518ms. Feb 14 09:03:07.471017 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.504ms. 
Feb 14 09:03:07.471029 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 09:03:07.471044 systemd[1]: Detected virtualization kvm. Feb 14 09:03:07.471055 systemd[1]: Detected architecture arm64. Feb 14 09:03:07.471065 systemd[1]: Detected first boot. Feb 14 09:03:07.471082 systemd[1]: Initializing machine ID from VM UUID. Feb 14 09:03:07.471101 zram_generator::config[1046]: No configuration found. Feb 14 09:03:07.471113 systemd[1]: Populated /etc with preset unit settings. Feb 14 09:03:07.471125 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 14 09:03:07.471137 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 14 09:03:07.471148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 14 09:03:07.471159 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 14 09:03:07.471170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 14 09:03:07.471181 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 14 09:03:07.471200 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 14 09:03:07.471212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 14 09:03:07.471225 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 14 09:03:07.471236 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 14 09:03:07.471246 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 14 09:03:07.471257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 09:03:07.471268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 09:03:07.471279 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 14 09:03:07.471290 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 14 09:03:07.471302 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 14 09:03:07.471312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 09:03:07.471325 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 14 09:03:07.471336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 09:03:07.471347 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 14 09:03:07.471358 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 14 09:03:07.471369 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 14 09:03:07.471380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 14 09:03:07.471391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 09:03:07.471401 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 09:03:07.471414 systemd[1]: Reached target slices.target - Slice Units. Feb 14 09:03:07.471425 systemd[1]: Reached target swap.target - Swaps. Feb 14 09:03:07.471444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 14 09:03:07.471464 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 14 09:03:07.471475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 14 09:03:07.471486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 09:03:07.471496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 09:03:07.471507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 14 09:03:07.471518 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 14 09:03:07.471530 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 14 09:03:07.471543 systemd[1]: Mounting media.mount - External Media Directory... Feb 14 09:03:07.471554 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 14 09:03:07.471569 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 14 09:03:07.471581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 14 09:03:07.471603 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 14 09:03:07.471621 systemd[1]: Reached target machines.target - Containers. Feb 14 09:03:07.471633 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 14 09:03:07.471645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 09:03:07.471656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 09:03:07.471667 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 14 09:03:07.471678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 09:03:07.471689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 09:03:07.471700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 14 09:03:07.471711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 14 09:03:07.471721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 09:03:07.471732 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 14 09:03:07.471746 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 14 09:03:07.471757 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 14 09:03:07.471767 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 14 09:03:07.471778 systemd[1]: Stopped systemd-fsck-usr.service. Feb 14 09:03:07.471788 kernel: loop: module loaded Feb 14 09:03:07.471800 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 09:03:07.471810 kernel: ACPI: bus type drm_connector registered Feb 14 09:03:07.471823 kernel: fuse: init (API version 7.39) Feb 14 09:03:07.471834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 09:03:07.471846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 14 09:03:07.471858 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 14 09:03:07.471869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 09:03:07.471880 systemd[1]: verity-setup.service: Deactivated successfully. Feb 14 09:03:07.471890 systemd[1]: Stopped verity-setup.service. Feb 14 09:03:07.471901 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 14 09:03:07.471912 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 14 09:03:07.471922 systemd[1]: Mounted media.mount - External Media Directory. Feb 14 09:03:07.471933 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 14 09:03:07.471963 systemd-journald[1117]: Collecting audit messages is disabled. Feb 14 09:03:07.471985 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 14 09:03:07.471997 systemd-journald[1117]: Journal started Feb 14 09:03:07.472021 systemd-journald[1117]: Runtime Journal (/run/log/journal/7e91918f010a47568b7d3abc44fce8c1) is 5.9M, max 47.3M, 41.4M free. Feb 14 09:03:07.268672 systemd[1]: Queued start job for default target multi-user.target. Feb 14 09:03:07.286558 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 14 09:03:07.286937 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 14 09:03:07.473109 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 09:03:07.474332 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 14 09:03:07.475456 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 14 09:03:07.477745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 09:03:07.479359 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 14 09:03:07.479547 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 14 09:03:07.481076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 09:03:07.481227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 09:03:07.482725 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 09:03:07.482874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 09:03:07.484242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 09:03:07.484413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 09:03:07.486041 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 14 09:03:07.486185 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Feb 14 09:03:07.487800 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 09:03:07.487949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 09:03:07.489388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 09:03:07.491028 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 14 09:03:07.492680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 14 09:03:07.506085 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 14 09:03:07.513682 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 14 09:03:07.515900 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 14 09:03:07.517022 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 14 09:03:07.517062 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 09:03:07.519066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 14 09:03:07.521060 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 14 09:03:07.522943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 14 09:03:07.523836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 09:03:07.525329 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 14 09:03:07.527113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 14 09:03:07.528120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 14 09:03:07.531768 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 14 09:03:07.532787 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 09:03:07.534445 systemd-journald[1117]: Time spent on flushing to /var/log/journal/7e91918f010a47568b7d3abc44fce8c1 is 20.578ms for 855 entries. Feb 14 09:03:07.534445 systemd-journald[1117]: System Journal (/var/log/journal/7e91918f010a47568b7d3abc44fce8c1) is 8.0M, max 195.6M, 187.6M free. Feb 14 09:03:07.568797 systemd-journald[1117]: Received client request to flush runtime journal. Feb 14 09:03:07.568859 kernel: loop0: detected capacity change from 0 to 114328 Feb 14 09:03:07.536851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 09:03:07.540866 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 14 09:03:07.544889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 09:03:07.549652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 09:03:07.550792 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 14 09:03:07.551997 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 14 09:03:07.553228 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 14 09:03:07.554417 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 14 09:03:07.564989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 14 09:03:07.569848 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 14 09:03:07.574959 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Feb 14 09:03:07.576421 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 14 09:03:07.577879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 09:03:07.585620 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 14 09:03:07.593427 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Feb 14 09:03:07.593444 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Feb 14 09:03:07.595227 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 14 09:03:07.596643 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 14 09:03:07.597863 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 14 09:03:07.599871 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 09:03:07.606784 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 14 09:03:07.622623 kernel: loop1: detected capacity change from 0 to 114432 Feb 14 09:03:07.638093 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 14 09:03:07.646753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 09:03:07.650616 kernel: loop2: detected capacity change from 0 to 194096 Feb 14 09:03:07.660674 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 14 09:03:07.660692 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 14 09:03:07.665158 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 14 09:03:07.696621 kernel: loop3: detected capacity change from 0 to 114328 Feb 14 09:03:07.702057 kernel: loop4: detected capacity change from 0 to 114432 Feb 14 09:03:07.706614 kernel: loop5: detected capacity change from 0 to 194096 Feb 14 09:03:07.710391 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 14 09:03:07.710875 (sd-merge)[1185]: Merged extensions into '/usr'. Feb 14 09:03:07.714420 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Feb 14 09:03:07.714437 systemd[1]: Reloading... Feb 14 09:03:07.768620 zram_generator::config[1208]: No configuration found. Feb 14 09:03:07.822268 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 14 09:03:07.872309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 09:03:07.908037 systemd[1]: Reloading finished in 193 ms. Feb 14 09:03:07.949275 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 14 09:03:07.950537 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 14 09:03:07.962948 systemd[1]: Starting ensure-sysext.service... Feb 14 09:03:07.964668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 09:03:07.974521 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Feb 14 09:03:07.974535 systemd[1]: Reloading... Feb 14 09:03:07.990168 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 14 09:03:07.990424 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Feb 14 09:03:07.991106 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 14 09:03:07.991324 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Feb 14 09:03:07.991374 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Feb 14 09:03:07.993504 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 09:03:07.993518 systemd-tmpfiles[1246]: Skipping /boot Feb 14 09:03:08.000555 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 09:03:08.000578 systemd-tmpfiles[1246]: Skipping /boot Feb 14 09:03:08.019627 zram_generator::config[1273]: No configuration found. Feb 14 09:03:08.103088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 09:03:08.139092 systemd[1]: Reloading finished in 164 ms. Feb 14 09:03:08.156638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 14 09:03:08.170006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 09:03:08.177907 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 09:03:08.180114 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 14 09:03:08.182869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 14 09:03:08.185958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 09:03:08.194948 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 09:03:08.198917 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Feb 14 09:03:08.202318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 09:03:08.204941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 09:03:08.208200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 09:03:08.210325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 09:03:08.211283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 09:03:08.215880 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 14 09:03:08.218655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 14 09:03:08.221241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 09:03:08.221372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 09:03:08.222928 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 09:03:08.223073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 09:03:08.226447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 09:03:08.226669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 09:03:08.236014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 09:03:08.238659 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Feb 14 09:03:08.247954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 09:03:08.250357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 09:03:08.253874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 09:03:08.260744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 14 09:03:08.261536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 09:03:08.264090 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 14 09:03:08.266699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 09:03:08.269284 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 09:03:08.271203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 14 09:03:08.274095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 09:03:08.274237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 09:03:08.275555 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 09:03:08.275704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 09:03:08.277085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 09:03:08.277198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 09:03:08.281233 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 09:03:08.282979 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 09:03:08.284964 augenrules[1358]: No rules Feb 14 09:03:08.286308 systemd[1]: Finished ensure-sysext.service. Feb 14 09:03:08.287131 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 09:03:08.288364 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 09:03:08.293059 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 09:03:08.310635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1348) Feb 14 09:03:08.316838 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 14 09:03:08.317587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 09:03:08.317688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 09:03:08.321976 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 14 09:03:08.324735 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 14 09:03:08.329964 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 14 09:03:08.347833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 14 09:03:08.349796 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 14 09:03:08.363279 systemd-resolved[1313]: Positive Trust Anchors: Feb 14 09:03:08.363292 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 09:03:08.363326 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 09:03:08.377945 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Feb 14 09:03:08.386639 systemd-resolved[1313]: Defaulting to hostname 'linux'. Feb 14 09:03:08.388421 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 09:03:08.389439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 09:03:08.406516 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 09:03:08.407936 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 09:03:08.415071 systemd-networkd[1382]: lo: Link UP Feb 14 09:03:08.415078 systemd-networkd[1382]: lo: Gained carrier Feb 14 09:03:08.415874 systemd-networkd[1382]: Enumeration completed Feb 14 09:03:08.416050 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 09:03:08.416756 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 09:03:08.416760 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 09:03:08.417004 systemd[1]: Reached target network.target - Network. Feb 14 09:03:08.417383 systemd-networkd[1382]: eth0: Link UP Feb 14 09:03:08.417392 systemd-networkd[1382]: eth0: Gained carrier Feb 14 09:03:08.417404 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 14 09:03:08.425344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 14 09:03:08.433420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 09:03:08.433673 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 14 09:03:08.434413 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Feb 14 09:03:08.434948 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Feb 14 09:03:08.435005 systemd-timesyncd[1383]: Initial clock synchronization to Fri 2025-02-14 09:03:08.192814 UTC. Feb 14 09:03:08.445954 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 14 09:03:08.448817 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 09:03:08.469672 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 09:03:08.475430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 09:03:08.504101 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 09:03:08.505272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 09:03:08.506148 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 09:03:08.507022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 14 09:03:08.507955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 09:03:08.509031 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 09:03:08.509916 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 09:03:08.510990 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 14 09:03:08.511951 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 09:03:08.511979 systemd[1]: Reached target paths.target - Path Units. Feb 14 09:03:08.512664 systemd[1]: Reached target timers.target - Timer Units. Feb 14 09:03:08.514052 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 09:03:08.516260 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Feb 14 09:03:08.525566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 09:03:08.527431 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 09:03:08.528720 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 14 09:03:08.529705 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 09:03:08.530398 systemd[1]: Reached target basic.target - Basic System. Feb 14 09:03:08.531165 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 09:03:08.531193 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 09:03:08.532063 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 09:03:08.533850 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 09:03:08.535680 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 09:03:08.536761 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 09:03:08.539495 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 09:03:08.544376 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 09:03:08.548704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 09:03:08.554538 jq[1411]: false Feb 14 09:03:08.550406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 14 09:03:08.554481 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 14 09:03:08.560216 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 14 09:03:08.560959 extend-filesystems[1412]: Found loop3 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found loop4 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found loop5 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda1 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda2 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda3 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found usr Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda4 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda6 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda7 Feb 14 09:03:08.561786 extend-filesystems[1412]: Found vda9 Feb 14 09:03:08.561786 extend-filesystems[1412]: Checking size of /dev/vda9 Feb 14 09:03:08.580648 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 14 09:03:08.565700 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 14 09:03:08.580808 extend-filesystems[1412]: Resized partition /dev/vda9 Feb 14 09:03:08.579621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 09:03:08.583400 extend-filesystems[1431]: resize2fs 1.47.1 (20-May-2024) Feb 14 09:03:08.590879 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1356) Feb 14 09:03:08.580127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 14 09:03:08.585690 dbus-daemon[1410]: [system] SELinux support is enabled Feb 14 09:03:08.587793 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 09:03:08.593730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 09:03:08.598181 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 14 09:03:08.601647 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 09:03:08.605714 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 14 09:03:08.608926 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 09:03:08.609088 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 09:03:08.609333 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 09:03:08.609482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 09:03:08.611227 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 14 09:03:08.611364 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 09:03:08.623602 extend-filesystems[1431]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 14 09:03:08.623602 extend-filesystems[1431]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 14 09:03:08.623602 extend-filesystems[1431]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 14 09:03:08.628716 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Feb 14 09:03:08.629370 jq[1433]: true Feb 14 09:03:08.629744 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 09:03:08.630724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 09:03:08.633263 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 09:03:08.646318 tar[1436]: linux-arm64/helm Feb 14 09:03:08.647971 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 09:03:08.648008 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 14 09:03:08.651705 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 09:03:08.651736 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 09:03:08.656431 update_engine[1432]: I20250214 09:03:08.656213 1432 main.cc:92] Flatcar Update Engine starting Feb 14 09:03:08.660618 jq[1446]: true Feb 14 09:03:08.661862 update_engine[1432]: I20250214 09:03:08.661804 1432 update_check_scheduler.cc:74] Next update check in 5m58s Feb 14 09:03:08.662031 systemd[1]: Started update-engine.service - Update Engine. Feb 14 09:03:08.669740 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Feb 14 09:03:08.670476 systemd-logind[1422]: New seat seat0. Feb 14 09:03:08.672743 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 09:03:08.676235 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 09:03:08.682947 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 09:03:08.717445 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Feb 14 09:03:08.719571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 09:03:08.721518 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 09:03:08.728887 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 09:03:08.729814 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 14 09:03:08.736434 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 09:03:08.736634 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Feb 14 09:03:08.736688 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 09:03:08.741824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 09:03:08.752613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 09:03:08.764916 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 09:03:08.767272 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 14 09:03:08.768757 systemd[1]: Reached target getty.target - Login Prompts. Feb 14 09:03:08.835397 containerd[1437]: time="2025-02-14T09:03:08.835275400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 09:03:08.858289 containerd[1437]: time="2025-02-14T09:03:08.858188840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859513 containerd[1437]: time="2025-02-14T09:03:08.859477280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859513 containerd[1437]: time="2025-02-14T09:03:08.859508200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 09:03:08.859575 containerd[1437]: time="2025-02-14T09:03:08.859522920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 09:03:08.859720 containerd[1437]: time="2025-02-14T09:03:08.859692000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 09:03:08.859720 containerd[1437]: time="2025-02-14T09:03:08.859717120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859794 containerd[1437]: time="2025-02-14T09:03:08.859778200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859816 containerd[1437]: time="2025-02-14T09:03:08.859794480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859976 containerd[1437]: time="2025-02-14T09:03:08.859949720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 09:03:08.859976 containerd[1437]: time="2025-02-14T09:03:08.859970320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860024 containerd[1437]: time="2025-02-14T09:03:08.859983440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860024 containerd[1437]: time="2025-02-14T09:03:08.859993800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860080 containerd[1437]: time="2025-02-14T09:03:08.860065240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860264 containerd[1437]: time="2025-02-14T09:03:08.860247640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860362 containerd[1437]: time="2025-02-14T09:03:08.860344560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 09:03:08.860389 containerd[1437]: time="2025-02-14T09:03:08.860361600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 09:03:08.860448 containerd[1437]: time="2025-02-14T09:03:08.860433640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 09:03:08.860489 containerd[1437]: time="2025-02-14T09:03:08.860477360Z" level=info msg="metadata content store policy set" policy=shared Feb 14 09:03:08.863628 containerd[1437]: time="2025-02-14T09:03:08.863580880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 09:03:08.863688 containerd[1437]: time="2025-02-14T09:03:08.863650280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 09:03:08.863688 containerd[1437]: time="2025-02-14T09:03:08.863666120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 09:03:08.863688 containerd[1437]: time="2025-02-14T09:03:08.863680960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 09:03:08.863759 containerd[1437]: time="2025-02-14T09:03:08.863693480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 09:03:08.863845 containerd[1437]: time="2025-02-14T09:03:08.863820920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864125960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864265920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864283760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864298280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864312720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864325840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864338360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864351400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864365280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864377800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864389760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864400920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864421480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.864884 containerd[1437]: time="2025-02-14T09:03:08.864435520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864447920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864469480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864485960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864498720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864509920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864521640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864534040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864548760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864571760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864585000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864622120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864645480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864666320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864677800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.865151 containerd[1437]: time="2025-02-14T09:03:08.864689560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 09:03:08.865879 containerd[1437]: time="2025-02-14T09:03:08.865844840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 09:03:08.865968 containerd[1437]: time="2025-02-14T09:03:08.865950960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 09:03:08.866018 containerd[1437]: time="2025-02-14T09:03:08.866006080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 14 09:03:08.866073 containerd[1437]: time="2025-02-14T09:03:08.866058560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 09:03:08.866124 containerd[1437]: time="2025-02-14T09:03:08.866111560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.866196 containerd[1437]: time="2025-02-14T09:03:08.866182800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 09:03:08.866246 containerd[1437]: time="2025-02-14T09:03:08.866235480Z" level=info msg="NRI interface is disabled by configuration." Feb 14 09:03:08.866295 containerd[1437]: time="2025-02-14T09:03:08.866283800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 14 09:03:08.866759 containerd[1437]: time="2025-02-14T09:03:08.866694800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 09:03:08.866936 containerd[1437]: time="2025-02-14T09:03:08.866917200Z" level=info msg="Connect containerd service" Feb 14 09:03:08.867039 containerd[1437]: time="2025-02-14T09:03:08.867022520Z" level=info msg="using legacy CRI server" Feb 14 09:03:08.867089 containerd[1437]: time="2025-02-14T09:03:08.867076600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 09:03:08.867209 containerd[1437]: 
time="2025-02-14T09:03:08.867193040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 09:03:08.867891 containerd[1437]: time="2025-02-14T09:03:08.867860400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 09:03:08.868291 containerd[1437]: time="2025-02-14T09:03:08.868190200Z" level=info msg="Start subscribing containerd event" Feb 14 09:03:08.868291 containerd[1437]: time="2025-02-14T09:03:08.868251760Z" level=info msg="Start recovering state" Feb 14 09:03:08.868364 containerd[1437]: time="2025-02-14T09:03:08.868320240Z" level=info msg="Start event monitor" Feb 14 09:03:08.868364 containerd[1437]: time="2025-02-14T09:03:08.868330960Z" level=info msg="Start snapshots syncer" Feb 14 09:03:08.868364 containerd[1437]: time="2025-02-14T09:03:08.868341200Z" level=info msg="Start cni network conf syncer for default" Feb 14 09:03:08.868364 containerd[1437]: time="2025-02-14T09:03:08.868348640Z" level=info msg="Start streaming server" Feb 14 09:03:08.869242 containerd[1437]: time="2025-02-14T09:03:08.868639280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 09:03:08.869242 containerd[1437]: time="2025-02-14T09:03:08.868700280Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 09:03:08.869242 containerd[1437]: time="2025-02-14T09:03:08.868746880Z" level=info msg="containerd successfully booted in 0.034897s" Feb 14 09:03:08.868844 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 09:03:08.988535 tar[1436]: linux-arm64/LICENSE Feb 14 09:03:08.988535 tar[1436]: linux-arm64/README.md Feb 14 09:03:09.001843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 14 09:03:09.829751 systemd-networkd[1382]: eth0: Gained IPv6LL Feb 14 09:03:09.831880 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 09:03:09.833757 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 09:03:09.843821 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 14 09:03:09.845912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 09:03:09.847624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 14 09:03:09.861344 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 14 09:03:09.862302 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 14 09:03:09.864091 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 14 09:03:09.865913 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 09:03:10.317102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 09:03:10.318312 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 09:03:10.322266 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 09:03:10.324159 systemd[1]: Startup finished in 624ms (kernel) + 6.170s (initrd) + 3.459s (userspace) = 10.253s. 
Feb 14 09:03:10.777764 kubelet[1523]: E0214 09:03:10.777638 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 09:03:10.780047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 09:03:10.780204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 09:03:13.249414 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 14 09:03:13.250486 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:52170.service - OpenSSH per-connection server daemon (10.0.0.1:52170). Feb 14 09:03:13.300782 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 52170 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:03:13.302380 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:03:13.313630 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 09:03:13.323844 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 09:03:13.325465 systemd-logind[1422]: New session 1 of user core. Feb 14 09:03:13.332882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 14 09:03:13.335230 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 09:03:13.341443 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 09:03:13.415968 systemd[1542]: Queued start job for default target default.target. Feb 14 09:03:13.424540 systemd[1542]: Created slice app.slice - User Application Slice. Feb 14 09:03:13.424591 systemd[1542]: Reached target paths.target - Paths. Feb 14 09:03:13.424622 systemd[1542]: Reached target timers.target - Timers. 
Feb 14 09:03:13.425889 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 09:03:13.435834 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 09:03:13.435899 systemd[1542]: Reached target sockets.target - Sockets. Feb 14 09:03:13.435911 systemd[1542]: Reached target basic.target - Basic System. Feb 14 09:03:13.435947 systemd[1542]: Reached target default.target - Main User Target. Feb 14 09:03:13.435976 systemd[1542]: Startup finished in 87ms. Feb 14 09:03:13.436236 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 09:03:13.437419 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 09:03:13.498197 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:52178.service - OpenSSH per-connection server daemon (10.0.0.1:52178). Feb 14 09:03:13.533038 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 52178 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:03:13.534316 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:03:13.538707 systemd-logind[1422]: New session 2 of user core. Feb 14 09:03:13.551771 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 09:03:13.603404 sshd[1553]: pam_unix(sshd:session): session closed for user core Feb 14 09:03:13.618040 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:52178.service: Deactivated successfully. Feb 14 09:03:13.619394 systemd[1]: session-2.scope: Deactivated successfully. Feb 14 09:03:13.621647 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Feb 14 09:03:13.621933 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). Feb 14 09:03:13.623666 systemd-logind[1422]: Removed session 2. 
Feb 14 09:03:13.657232 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:03:13.657982 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:03:13.663655 systemd-logind[1422]: New session 3 of user core. Feb 14 09:03:13.675784 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 14 09:03:13.723539 sshd[1560]: pam_unix(sshd:session): session closed for user core Feb 14 09:03:13.733902 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:52192.service: Deactivated successfully. Feb 14 09:03:13.735518 systemd[1]: session-3.scope: Deactivated successfully. Feb 14 09:03:13.737484 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Feb 14 09:03:13.746830 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:52204.service - OpenSSH per-connection server daemon (10.0.0.1:52204). Feb 14 09:03:13.747567 systemd-logind[1422]: Removed session 3. Feb 14 09:03:13.777399 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 52204 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:03:13.778604 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:03:13.782023 systemd-logind[1422]: New session 4 of user core. Feb 14 09:03:13.787781 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 09:03:13.838697 sshd[1567]: pam_unix(sshd:session): session closed for user core Feb 14 09:03:13.850912 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:52204.service: Deactivated successfully. Feb 14 09:03:13.852318 systemd[1]: session-4.scope: Deactivated successfully. Feb 14 09:03:13.854721 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Feb 14 09:03:13.856289 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:52216.service - OpenSSH per-connection server daemon (10.0.0.1:52216). Feb 14 09:03:13.857326 systemd-logind[1422]: Removed session 4. 
Feb 14 09:03:13.891237 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:03:13.892556 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:03:13.897654 systemd-logind[1422]: New session 5 of user core.
Feb 14 09:03:13.911753 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 14 09:03:13.973900 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 14 09:03:13.974186 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 14 09:03:14.317806 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 14 09:03:14.317962 (dockerd)[1595]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 14 09:03:14.621253 dockerd[1595]: time="2025-02-14T09:03:14.621126580Z" level=info msg="Starting up"
Feb 14 09:03:14.812122 dockerd[1595]: time="2025-02-14T09:03:14.811814728Z" level=info msg="Loading containers: start."
Feb 14 09:03:14.955624 kernel: Initializing XFRM netlink socket
Feb 14 09:03:15.030403 systemd-networkd[1382]: docker0: Link UP
Feb 14 09:03:15.046973 dockerd[1595]: time="2025-02-14T09:03:15.046923368Z" level=info msg="Loading containers: done."
Feb 14 09:03:15.061475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2739506815-merged.mount: Deactivated successfully.
Feb 14 09:03:15.063966 dockerd[1595]: time="2025-02-14T09:03:15.063827946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 14 09:03:15.063966 dockerd[1595]: time="2025-02-14T09:03:15.063947658Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 14 09:03:15.064086 dockerd[1595]: time="2025-02-14T09:03:15.064045037Z" level=info msg="Daemon has completed initialization"
Feb 14 09:03:15.092336 dockerd[1595]: time="2025-02-14T09:03:15.091652443Z" level=info msg="API listen on /run/docker.sock"
Feb 14 09:03:15.091876 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 14 09:03:15.838610 containerd[1437]: time="2025-02-14T09:03:15.838561192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 14 09:03:16.513101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977483668.mount: Deactivated successfully.
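The dockerd warning above fires because overlay2's fast "native diff" path is unreliable when the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, so the daemon falls back to a slower comparison. As a hedged illustration (the config-file paths checked below are conventional but distro-dependent, not taken from this log), one can reproduce the daemon's kernel-option probe like this:

```python
import gzip
import os


def kernel_option_enabled(option: str) -> bool:
    """Return True if a kernel config option is built in (=y).

    Scans the usual kernel config locations: /proc/config.gz (if the
    running kernel exposes it) and /boot/config-$(uname -r). These paths
    are an assumption for illustration; not every distro ships both.
    """
    candidates = ["/proc/config.gz", f"/boot/config-{os.uname().release}"]
    for path in candidates:
        if not os.path.exists(path):
            continue
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as fh:
            for line in fh:
                if line.strip() == f"{option}=y":
                    return True
    return False
```

On the node in this log, `kernel_option_enabled("CONFIG_OVERLAY_FS_REDIRECT_DIR")` would return True, matching the storage-driver warning.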
Feb 14 09:03:18.144969 containerd[1437]: time="2025-02-14T09:03:18.144923833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:18.146467 containerd[1437]: time="2025-02-14T09:03:18.146436546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209"
Feb 14 09:03:18.147621 containerd[1437]: time="2025-02-14T09:03:18.147563440Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:18.150335 containerd[1437]: time="2025-02-14T09:03:18.150288028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:18.152447 containerd[1437]: time="2025-02-14T09:03:18.151835700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.313212529s"
Feb 14 09:03:18.152447 containerd[1437]: time="2025-02-14T09:03:18.151873315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 14 09:03:18.170543 containerd[1437]: time="2025-02-14T09:03:18.170497580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 14 09:03:20.124816 containerd[1437]: time="2025-02-14T09:03:20.124764270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:20.125384 containerd[1437]: time="2025-02-14T09:03:20.125352743Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596"
Feb 14 09:03:20.126212 containerd[1437]: time="2025-02-14T09:03:20.126187488Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:20.129012 containerd[1437]: time="2025-02-14T09:03:20.128982049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:20.131123 containerd[1437]: time="2025-02-14T09:03:20.131088316Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.960540653s"
Feb 14 09:03:20.131184 containerd[1437]: time="2025-02-14T09:03:20.131125972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 14 09:03:20.149151 containerd[1437]: time="2025-02-14T09:03:20.149112941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 14 09:03:21.030545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 14 09:03:21.039754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 09:03:21.138005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 09:03:21.141583 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 14 09:03:21.180815 kubelet[1830]: E0214 09:03:21.180760 1830 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 14 09:03:21.183794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 09:03:21.183940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 14 09:03:21.561009 containerd[1437]: time="2025-02-14T09:03:21.560963789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:21.561926 containerd[1437]: time="2025-02-14T09:03:21.561725678Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936"
Feb 14 09:03:21.562818 containerd[1437]: time="2025-02-14T09:03:21.562791741Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:21.565959 containerd[1437]: time="2025-02-14T09:03:21.565909704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:21.567181 containerd[1437]: time="2025-02-14T09:03:21.567135468Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.417981288s"
Feb 14 09:03:21.567181 containerd[1437]: time="2025-02-14T09:03:21.567172680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 14 09:03:21.584816 containerd[1437]: time="2025-02-14T09:03:21.584782347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 14 09:03:22.627065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811415186.mount: Deactivated successfully.
Feb 14 09:03:22.909900 containerd[1437]: time="2025-02-14T09:03:22.909771703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:22.911013 containerd[1437]: time="2025-02-14T09:03:22.910981338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372"
Feb 14 09:03:22.912043 containerd[1437]: time="2025-02-14T09:03:22.911981098Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:22.914406 containerd[1437]: time="2025-02-14T09:03:22.914366272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:22.915189 containerd[1437]: time="2025-02-14T09:03:22.915027566Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.330205135s"
Feb 14 09:03:22.915189 containerd[1437]: time="2025-02-14T09:03:22.915073480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 14 09:03:22.932879 containerd[1437]: time="2025-02-14T09:03:22.932776735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 14 09:03:23.470610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573623193.mount: Deactivated successfully.
Feb 14 09:03:24.306267 containerd[1437]: time="2025-02-14T09:03:24.306109630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.307123 containerd[1437]: time="2025-02-14T09:03:24.307074236Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 14 09:03:24.307777 containerd[1437]: time="2025-02-14T09:03:24.307722418Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.311012 containerd[1437]: time="2025-02-14T09:03:24.310966879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.312041 containerd[1437]: time="2025-02-14T09:03:24.312012353Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.37919562s"
Feb 14 09:03:24.312090 containerd[1437]: time="2025-02-14T09:03:24.312048604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 14 09:03:24.330819 containerd[1437]: time="2025-02-14T09:03:24.330767829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 14 09:03:24.772636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413446541.mount: Deactivated successfully.
Feb 14 09:03:24.777462 containerd[1437]: time="2025-02-14T09:03:24.777421904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.778197 containerd[1437]: time="2025-02-14T09:03:24.778163066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 14 09:03:24.778894 containerd[1437]: time="2025-02-14T09:03:24.778839414Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.781628 containerd[1437]: time="2025-02-14T09:03:24.781503174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:24.782443 containerd[1437]: time="2025-02-14T09:03:24.782051287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 451.243463ms"
Feb 14 09:03:24.782443 containerd[1437]: time="2025-02-14T09:03:24.782079253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 14 09:03:24.799993 containerd[1437]: time="2025-02-14T09:03:24.799948800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 14 09:03:25.291036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218069407.mount: Deactivated successfully.
Feb 14 09:03:27.771970 containerd[1437]: time="2025-02-14T09:03:27.771912176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:27.772806 containerd[1437]: time="2025-02-14T09:03:27.772759014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Feb 14 09:03:27.773675 containerd[1437]: time="2025-02-14T09:03:27.773611356Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:27.777577 containerd[1437]: time="2025-02-14T09:03:27.777529850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:03:27.778624 containerd[1437]: time="2025-02-14T09:03:27.778624248Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.978638945s"
Feb 14 09:03:27.778719 containerd[1437]: time="2025-02-14T09:03:27.778660270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 14 09:03:31.434270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 14 09:03:31.443841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 09:03:31.533387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 09:03:31.537908 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 14 09:03:31.575437 kubelet[2052]: E0214 09:03:31.575390 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 14 09:03:31.578206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 09:03:31.578349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 14 09:03:32.910311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 09:03:32.921160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 09:03:32.937256 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-5.scope)...
Feb 14 09:03:32.937273 systemd[1]: Reloading...
Feb 14 09:03:33.006639 zram_generator::config[2107]: No configuration found.
Feb 14 09:03:33.111679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 14 09:03:33.164243 systemd[1]: Reloading finished in 226 ms.
Feb 14 09:03:33.205434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 09:03:33.207781 systemd[1]: kubelet.service: Deactivated successfully.
Feb 14 09:03:33.207960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 09:03:33.209315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 14 09:03:33.304222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 14 09:03:33.308334 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 14 09:03:33.345486 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 09:03:33.345486 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 14 09:03:33.345486 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 09:03:33.346355 kubelet[2153]: I0214 09:03:33.346299 2153 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 14 09:03:34.242995 kubelet[2153]: I0214 09:03:34.242947 2153 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 14 09:03:34.242995 kubelet[2153]: I0214 09:03:34.242980 2153 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 09:03:34.243191 kubelet[2153]: I0214 09:03:34.243175 2153 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 14 09:03:34.295298 kubelet[2153]: I0214 09:03:34.295254 2153 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 14 09:03:34.295670 kubelet[2153]: E0214 09:03:34.295546 2153 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.304221 kubelet[2153]: I0214 09:03:34.304193 2153 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 14 09:03:34.305537 kubelet[2153]: I0214 09:03:34.304760 2153 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 14 09:03:34.305537 kubelet[2153]: I0214 09:03:34.304805 2153 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 14 09:03:34.305537 kubelet[2153]: I0214 09:03:34.305042 2153 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 09:03:34.305537 kubelet[2153]: I0214 09:03:34.305051 2153 container_manager_linux.go:301] "Creating device plugin manager"
Feb 14 09:03:34.305537 kubelet[2153]: I0214 09:03:34.305285 2153 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 09:03:34.306368 kubelet[2153]: I0214 09:03:34.306348 2153 kubelet.go:400] "Attempting to sync node with API server"
Feb 14 09:03:34.306448 kubelet[2153]: I0214 09:03:34.306437 2153 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 09:03:34.306806 kubelet[2153]: I0214 09:03:34.306795 2153 kubelet.go:312] "Adding apiserver pod source"
Feb 14 09:03:34.306994 kubelet[2153]: I0214 09:03:34.306984 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 09:03:34.307140 kubelet[2153]: W0214 09:03:34.307084 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.307180 kubelet[2153]: E0214 09:03:34.307150 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.307668 kubelet[2153]: W0214 09:03:34.307553 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.307668 kubelet[2153]: E0214 09:03:34.307615 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.308221 kubelet[2153]: I0214 09:03:34.308199 2153 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 14 09:03:34.308575 kubelet[2153]: I0214 09:03:34.308561 2153 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 09:03:34.308693 kubelet[2153]: W0214 09:03:34.308681 2153 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 14 09:03:34.309493 kubelet[2153]: I0214 09:03:34.309470 2153 server.go:1264] "Started kubelet"
Feb 14 09:03:34.311973 kubelet[2153]: I0214 09:03:34.311949 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 14 09:03:34.318528 kubelet[2153]: I0214 09:03:34.318024 2153 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 09:03:34.318528 kubelet[2153]: I0214 09:03:34.318017 2153 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 09:03:34.318528 kubelet[2153]: E0214 09:03:34.318025 2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182407b1563f3b22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-14 09:03:34.309444386 +0000 UTC m=+0.997732204,LastTimestamp:2025-02-14 09:03:34.309444386 +0000 UTC m=+0.997732204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 14 09:03:34.318528 kubelet[2153]: I0214 09:03:34.318302 2153 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 09:03:34.318789 kubelet[2153]: E0214 09:03:34.318736 2153 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 14 09:03:34.318855 kubelet[2153]: I0214 09:03:34.318840 2153 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 14 09:03:34.319181 kubelet[2153]: I0214 09:03:34.318941 2153 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 14 09:03:34.319181 kubelet[2153]: I0214 09:03:34.318995 2153 reconciler.go:26] "Reconciler: start to sync state"
Feb 14 09:03:34.319181 kubelet[2153]: E0214 09:03:34.318989 2153 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 14 09:03:34.319275 kubelet[2153]: W0214 09:03:34.319246 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.319298 kubelet[2153]: E0214 09:03:34.319284 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.319579 kubelet[2153]: E0214 09:03:34.319511 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Feb 14 09:03:34.319816 kubelet[2153]: I0214 09:03:34.319793 2153 factory.go:221] Registration of the systemd container factory successfully
Feb 14 09:03:34.319927 kubelet[2153]: I0214 09:03:34.319870 2153 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 14 09:03:34.321124 kubelet[2153]: I0214 09:03:34.321015 2153 factory.go:221] Registration of the containerd container factory successfully
Feb 14 09:03:34.322412 kubelet[2153]: I0214 09:03:34.322272 2153 server.go:455] "Adding debug handlers to kubelet server"
Feb 14 09:03:34.330979 kubelet[2153]: I0214 09:03:34.330943 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 14 09:03:34.332127 kubelet[2153]: I0214 09:03:34.332084 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 14 09:03:34.332263 kubelet[2153]: I0214 09:03:34.332252 2153 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 14 09:03:34.332294 kubelet[2153]: I0214 09:03:34.332271 2153 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 14 09:03:34.332321 kubelet[2153]: E0214 09:03:34.332310 2153 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 14 09:03:34.333627 kubelet[2153]: W0214 09:03:34.333245 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.333627 kubelet[2153]: E0214 09:03:34.333298 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Feb 14 09:03:34.335548 kubelet[2153]: I0214 09:03:34.335515 2153 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 14 09:03:34.336018 kubelet[2153]: I0214 09:03:34.336001 2153 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 14 09:03:34.336083 kubelet[2153]: I0214 09:03:34.336075 2153 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 09:03:34.401713 kubelet[2153]: I0214 09:03:34.401670 2153 policy_none.go:49] "None policy: Start"
Feb 14 09:03:34.402920 kubelet[2153]: I0214 09:03:34.402465 2153 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 14 09:03:34.402920 kubelet[2153]: I0214 09:03:34.402489 2153 state_mem.go:35] "Initializing new in-memory state store"
Feb 14 09:03:34.413996 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 14 09:03:34.420138 kubelet[2153]: I0214 09:03:34.420099 2153 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 14 09:03:34.420521 kubelet[2153]: E0214 09:03:34.420479 2153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Feb 14 09:03:34.428587 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 14 09:03:34.431614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 14 09:03:34.432482 kubelet[2153]: E0214 09:03:34.432458 2153 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 14 09:03:34.441461 kubelet[2153]: I0214 09:03:34.441426 2153 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 09:03:34.441907 kubelet[2153]: I0214 09:03:34.441822 2153 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 09:03:34.442016 kubelet[2153]: I0214 09:03:34.441979 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 09:03:34.445946 kubelet[2153]: E0214 09:03:34.445918 2153 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 14 09:03:34.520617 kubelet[2153]: E0214 09:03:34.520502 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Feb 14 09:03:34.621733 kubelet[2153]: I0214 09:03:34.621692 2153 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 14 09:03:34.622109 kubelet[2153]: E0214 09:03:34.622065 2153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 14 09:03:34.633289 kubelet[2153]: I0214 09:03:34.633256 2153 topology_manager.go:215] "Topology Admit Handler" podUID="cd13ca45f6753a74635cd09ff64fb377" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 14 09:03:34.634365 kubelet[2153]: I0214 09:03:34.634335 2153 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Feb 14 09:03:34.635375 kubelet[2153]: I0214 09:03:34.635332 2153 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 14 09:03:34.641417 systemd[1]: Created slice kubepods-burstable-podcd13ca45f6753a74635cd09ff64fb377.slice - libcontainer container kubepods-burstable-podcd13ca45f6753a74635cd09ff64fb377.slice. Feb 14 09:03:34.662482 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 14 09:03:34.666348 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 14 09:03:34.720975 kubelet[2153]: I0214 09:03:34.720936 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:34.720975 kubelet[2153]: I0214 09:03:34.720986 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 14 09:03:34.721128 kubelet[2153]: I0214 09:03:34.721006 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:34.721128 kubelet[2153]: I0214 09:03:34.721027 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:34.721128 kubelet[2153]: I0214 09:03:34.721044 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:34.721128 kubelet[2153]: I0214 09:03:34.721059 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:34.721128 kubelet[2153]: I0214 09:03:34.721076 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:34.721245 kubelet[2153]: I0214 09:03:34.721092 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:34.721245 kubelet[2153]: I0214 09:03:34.721107 2153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:34.921167 kubelet[2153]: E0214 09:03:34.921031 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Feb 14 09:03:34.960367 kubelet[2153]: E0214 09:03:34.960330 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:34.961176 containerd[1437]: time="2025-02-14T09:03:34.961052274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd13ca45f6753a74635cd09ff64fb377,Namespace:kube-system,Attempt:0,}" Feb 14 09:03:34.964572 kubelet[2153]: E0214 09:03:34.964551 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:34.964976 containerd[1437]: time="2025-02-14T09:03:34.964931423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 14 09:03:34.968458 kubelet[2153]: E0214 09:03:34.968417 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:34.968834 
containerd[1437]: time="2025-02-14T09:03:34.968797506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 14 09:03:35.023617 kubelet[2153]: I0214 09:03:35.023394 2153 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 14 09:03:35.023798 kubelet[2153]: E0214 09:03:35.023767 2153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 14 09:03:35.265149 kubelet[2153]: W0214 09:03:35.265011 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.265149 kubelet[2153]: E0214 09:03:35.265079 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.466261 kubelet[2153]: W0214 09:03:35.466159 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.466261 kubelet[2153]: E0214 09:03:35.466235 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.516348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654305909.mount: Deactivated successfully. 
Feb 14 09:03:35.521239 containerd[1437]: time="2025-02-14T09:03:35.520887521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 09:03:35.522096 containerd[1437]: time="2025-02-14T09:03:35.522070448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 14 09:03:35.524063 containerd[1437]: time="2025-02-14T09:03:35.524011183Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 09:03:35.526249 containerd[1437]: time="2025-02-14T09:03:35.526016297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 09:03:35.526249 containerd[1437]: time="2025-02-14T09:03:35.526101616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 09:03:35.527056 containerd[1437]: time="2025-02-14T09:03:35.526985825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 09:03:35.527648 containerd[1437]: time="2025-02-14T09:03:35.527620947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 09:03:35.528276 containerd[1437]: time="2025-02-14T09:03:35.528241683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 09:03:35.529257 
containerd[1437]: time="2025-02-14T09:03:35.529186595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.048731ms" Feb 14 09:03:35.532719 containerd[1437]: time="2025-02-14T09:03:35.532675673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.815352ms" Feb 14 09:03:35.534457 containerd[1437]: time="2025-02-14T09:03:35.534313972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.310026ms" Feb 14 09:03:35.613898 kubelet[2153]: W0214 09:03:35.613832 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.613898 kubelet[2153]: E0214 09:03:35.613881 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.663073 containerd[1437]: time="2025-02-14T09:03:35.662982619Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 09:03:35.663073 containerd[1437]: time="2025-02-14T09:03:35.663046879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 09:03:35.663252 containerd[1437]: time="2025-02-14T09:03:35.663069098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.663252 containerd[1437]: time="2025-02-14T09:03:35.663168364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.664419 containerd[1437]: time="2025-02-14T09:03:35.664309291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 09:03:35.664419 containerd[1437]: time="2025-02-14T09:03:35.664358405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 09:03:35.664419 containerd[1437]: time="2025-02-14T09:03:35.664373151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.664764 containerd[1437]: time="2025-02-14T09:03:35.664709914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.667390 containerd[1437]: time="2025-02-14T09:03:35.665770277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 09:03:35.667390 containerd[1437]: time="2025-02-14T09:03:35.666300338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 09:03:35.667390 containerd[1437]: time="2025-02-14T09:03:35.666347454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.667390 containerd[1437]: time="2025-02-14T09:03:35.666502788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:35.682744 systemd[1]: Started cri-containerd-2676d80b03d6c75ad218a9d2df593d7c711108035f07a4556c1ead34299f5697.scope - libcontainer container 2676d80b03d6c75ad218a9d2df593d7c711108035f07a4556c1ead34299f5697. Feb 14 09:03:35.683919 systemd[1]: Started cri-containerd-47b133867b39fe06b86ef51fe20bd2069c9799550e02903a286745c07a3c978a.scope - libcontainer container 47b133867b39fe06b86ef51fe20bd2069c9799550e02903a286745c07a3c978a. Feb 14 09:03:35.687577 systemd[1]: Started cri-containerd-b7fee312b8058c0035eaf4b63bbc8373c056a2252180292cde066db6d1bf5879.scope - libcontainer container b7fee312b8058c0035eaf4b63bbc8373c056a2252180292cde066db6d1bf5879. 
Feb 14 09:03:35.718237 containerd[1437]: time="2025-02-14T09:03:35.718107046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"47b133867b39fe06b86ef51fe20bd2069c9799550e02903a286745c07a3c978a\"" Feb 14 09:03:35.718626 containerd[1437]: time="2025-02-14T09:03:35.718604818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"2676d80b03d6c75ad218a9d2df593d7c711108035f07a4556c1ead34299f5697\"" Feb 14 09:03:35.722158 kubelet[2153]: E0214 09:03:35.722100 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Feb 14 09:03:35.723653 containerd[1437]: time="2025-02-14T09:03:35.723311431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd13ca45f6753a74635cd09ff64fb377,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7fee312b8058c0035eaf4b63bbc8373c056a2252180292cde066db6d1bf5879\"" Feb 14 09:03:35.723730 kubelet[2153]: E0214 09:03:35.723424 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:35.723730 kubelet[2153]: E0214 09:03:35.723676 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:35.724009 kubelet[2153]: E0214 09:03:35.723977 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 14 09:03:35.726484 containerd[1437]: time="2025-02-14T09:03:35.726450518Z" level=info msg="CreateContainer within sandbox \"2676d80b03d6c75ad218a9d2df593d7c711108035f07a4556c1ead34299f5697\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 09:03:35.726797 containerd[1437]: time="2025-02-14T09:03:35.726691451Z" level=info msg="CreateContainer within sandbox \"47b133867b39fe06b86ef51fe20bd2069c9799550e02903a286745c07a3c978a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 09:03:35.726797 containerd[1437]: time="2025-02-14T09:03:35.726510262Z" level=info msg="CreateContainer within sandbox \"b7fee312b8058c0035eaf4b63bbc8373c056a2252180292cde066db6d1bf5879\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 09:03:35.741011 containerd[1437]: time="2025-02-14T09:03:35.740968142Z" level=info msg="CreateContainer within sandbox \"2676d80b03d6c75ad218a9d2df593d7c711108035f07a4556c1ead34299f5697\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ecfb1c5285b3d94ba37a1cfb1ba85fc8b3eff64f79e47ae0a8fe96a3f5847c5\"" Feb 14 09:03:35.742331 containerd[1437]: time="2025-02-14T09:03:35.741670481Z" level=info msg="StartContainer for \"6ecfb1c5285b3d94ba37a1cfb1ba85fc8b3eff64f79e47ae0a8fe96a3f5847c5\"" Feb 14 09:03:35.750992 containerd[1437]: time="2025-02-14T09:03:35.750914706Z" level=info msg="CreateContainer within sandbox \"47b133867b39fe06b86ef51fe20bd2069c9799550e02903a286745c07a3c978a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47f5bf68eb9bcff3499677eee2015b68069ef3282df37205cce801a82b709017\"" Feb 14 09:03:35.751373 containerd[1437]: time="2025-02-14T09:03:35.751349097Z" level=info msg="StartContainer for \"47f5bf68eb9bcff3499677eee2015b68069ef3282df37205cce801a82b709017\"" Feb 14 09:03:35.751833 containerd[1437]: time="2025-02-14T09:03:35.751742087Z" level=info msg="CreateContainer within sandbox 
\"b7fee312b8058c0035eaf4b63bbc8373c056a2252180292cde066db6d1bf5879\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e788f98f9160ac96057efd671cc1b42d7732b8f3a8d6a122ca9aa61f9f028ec7\"" Feb 14 09:03:35.752113 containerd[1437]: time="2025-02-14T09:03:35.752087882Z" level=info msg="StartContainer for \"e788f98f9160ac96057efd671cc1b42d7732b8f3a8d6a122ca9aa61f9f028ec7\"" Feb 14 09:03:35.767179 systemd[1]: Started cri-containerd-6ecfb1c5285b3d94ba37a1cfb1ba85fc8b3eff64f79e47ae0a8fe96a3f5847c5.scope - libcontainer container 6ecfb1c5285b3d94ba37a1cfb1ba85fc8b3eff64f79e47ae0a8fe96a3f5847c5. Feb 14 09:03:35.782777 systemd[1]: Started cri-containerd-47f5bf68eb9bcff3499677eee2015b68069ef3282df37205cce801a82b709017.scope - libcontainer container 47f5bf68eb9bcff3499677eee2015b68069ef3282df37205cce801a82b709017. Feb 14 09:03:35.785622 systemd[1]: Started cri-containerd-e788f98f9160ac96057efd671cc1b42d7732b8f3a8d6a122ca9aa61f9f028ec7.scope - libcontainer container e788f98f9160ac96057efd671cc1b42d7732b8f3a8d6a122ca9aa61f9f028ec7. 
Feb 14 09:03:35.820619 containerd[1437]: time="2025-02-14T09:03:35.820124044Z" level=info msg="StartContainer for \"6ecfb1c5285b3d94ba37a1cfb1ba85fc8b3eff64f79e47ae0a8fe96a3f5847c5\" returns successfully" Feb 14 09:03:35.827505 kubelet[2153]: I0214 09:03:35.827466 2153 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 14 09:03:35.828030 kubelet[2153]: E0214 09:03:35.828002 2153 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 14 09:03:35.864230 containerd[1437]: time="2025-02-14T09:03:35.864185557Z" level=info msg="StartContainer for \"e788f98f9160ac96057efd671cc1b42d7732b8f3a8d6a122ca9aa61f9f028ec7\" returns successfully" Feb 14 09:03:35.864318 containerd[1437]: time="2025-02-14T09:03:35.864294335Z" level=info msg="StartContainer for \"47f5bf68eb9bcff3499677eee2015b68069ef3282df37205cce801a82b709017\" returns successfully" Feb 14 09:03:35.877362 kubelet[2153]: W0214 09:03:35.877278 2153 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:35.877362 kubelet[2153]: E0214 09:03:35.877341 2153 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 14 09:03:36.342778 kubelet[2153]: E0214 09:03:36.342742 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:36.347017 kubelet[2153]: E0214 09:03:36.346987 2153 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:36.347896 kubelet[2153]: E0214 09:03:36.347872 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:37.348603 kubelet[2153]: E0214 09:03:37.348559 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:37.429237 kubelet[2153]: I0214 09:03:37.429194 2153 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 14 09:03:38.146959 kubelet[2153]: E0214 09:03:38.146904 2153 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 14 09:03:38.297025 kubelet[2153]: I0214 09:03:38.296969 2153 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 14 09:03:38.306010 kubelet[2153]: E0214 09:03:38.305507 2153 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 14 09:03:38.406686 kubelet[2153]: E0214 09:03:38.406568 2153 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 14 09:03:38.507008 kubelet[2153]: E0214 09:03:38.506975 2153 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 14 09:03:38.986242 kubelet[2153]: E0214 09:03:38.986202 2153 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:38.986669 kubelet[2153]: E0214 09:03:38.986649 2153 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:39.310787 kubelet[2153]: I0214 09:03:39.310674 2153 apiserver.go:52] "Watching apiserver" Feb 14 09:03:39.319369 kubelet[2153]: I0214 09:03:39.319289 2153 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 09:03:40.267756 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-5.scope)... Feb 14 09:03:40.267772 systemd[1]: Reloading... Feb 14 09:03:40.331647 zram_generator::config[2468]: No configuration found. Feb 14 09:03:40.420492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 09:03:40.487921 systemd[1]: Reloading finished in 219 ms. Feb 14 09:03:40.521884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 09:03:40.529460 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 09:03:40.529741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 09:03:40.529804 systemd[1]: kubelet.service: Consumed 1.411s CPU time, 118.4M memory peak, 0B memory swap peak. Feb 14 09:03:40.537128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 09:03:40.626552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 09:03:40.631084 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 09:03:40.684330 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 09:03:40.684330 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 14 09:03:40.684330 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 09:03:40.684725 kubelet[2510]: I0214 09:03:40.684366 2510 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 09:03:40.697988 kubelet[2510]: I0214 09:03:40.697938 2510 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 14 09:03:40.697988 kubelet[2510]: I0214 09:03:40.697970 2510 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 09:03:40.698189 kubelet[2510]: I0214 09:03:40.698172 2510 server.go:927] "Client rotation is on, will bootstrap in background" Feb 14 09:03:40.701227 kubelet[2510]: I0214 09:03:40.701192 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 14 09:03:40.702584 kubelet[2510]: I0214 09:03:40.702469 2510 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 09:03:40.709283 kubelet[2510]: I0214 09:03:40.709245 2510 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 09:03:40.709936 kubelet[2510]: I0214 09:03:40.709888 2510 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 09:03:40.710298 kubelet[2510]: I0214 09:03:40.709931 2510 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 14 09:03:40.710384 kubelet[2510]: I0214 09:03:40.710309 2510 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 
09:03:40.710384 kubelet[2510]: I0214 09:03:40.710322 2510 container_manager_linux.go:301] "Creating device plugin manager" Feb 14 09:03:40.710384 kubelet[2510]: I0214 09:03:40.710362 2510 state_mem.go:36] "Initialized new in-memory state store" Feb 14 09:03:40.710473 kubelet[2510]: I0214 09:03:40.710459 2510 kubelet.go:400] "Attempting to sync node with API server" Feb 14 09:03:40.710473 kubelet[2510]: I0214 09:03:40.710471 2510 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 09:03:40.710515 kubelet[2510]: I0214 09:03:40.710507 2510 kubelet.go:312] "Adding apiserver pod source" Feb 14 09:03:40.710537 kubelet[2510]: I0214 09:03:40.710520 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 09:03:40.712162 kubelet[2510]: I0214 09:03:40.712042 2510 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 09:03:40.712344 kubelet[2510]: I0214 09:03:40.712326 2510 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 09:03:40.712829 kubelet[2510]: I0214 09:03:40.712808 2510 server.go:1264] "Started kubelet" Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.713429 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.713656 2510 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.713690 2510 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.714475 2510 server.go:455] "Adding debug handlers to kubelet server" Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.714476 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 09:03:40.716985 kubelet[2510]: I0214 09:03:40.715266 2510 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 14 09:03:40.721175 kubelet[2510]: I0214 09:03:40.715344 2510 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 09:03:40.721175 kubelet[2510]: I0214 09:03:40.719172 2510 reconciler.go:26] "Reconciler: start to sync state" Feb 14 09:03:40.728343 kubelet[2510]: E0214 09:03:40.728304 2510 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 09:03:40.733453 kubelet[2510]: I0214 09:03:40.733414 2510 factory.go:221] Registration of the systemd container factory successfully Feb 14 09:03:40.733538 kubelet[2510]: I0214 09:03:40.733524 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 09:03:40.734944 kubelet[2510]: I0214 09:03:40.734884 2510 factory.go:221] Registration of the containerd container factory successfully Feb 14 09:03:40.737118 kubelet[2510]: I0214 09:03:40.737067 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 09:03:40.738042 kubelet[2510]: I0214 09:03:40.738018 2510 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 14 09:03:40.738075 kubelet[2510]: I0214 09:03:40.738056 2510 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 09:03:40.738103 kubelet[2510]: I0214 09:03:40.738076 2510 kubelet.go:2337] "Starting kubelet main sync loop" Feb 14 09:03:40.738146 kubelet[2510]: E0214 09:03:40.738127 2510 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 09:03:40.767137 kubelet[2510]: I0214 09:03:40.767111 2510 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 14 09:03:40.767137 kubelet[2510]: I0214 09:03:40.767127 2510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 14 09:03:40.767137 kubelet[2510]: I0214 09:03:40.767145 2510 state_mem.go:36] "Initialized new in-memory state store" Feb 14 09:03:40.767305 kubelet[2510]: I0214 09:03:40.767286 2510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 14 09:03:40.767327 kubelet[2510]: I0214 09:03:40.767296 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 14 09:03:40.767327 kubelet[2510]: I0214 09:03:40.767313 2510 policy_none.go:49] "None policy: Start" Feb 14 09:03:40.767891 kubelet[2510]: I0214 09:03:40.767874 2510 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 09:03:40.767946 kubelet[2510]: I0214 09:03:40.767897 2510 state_mem.go:35] "Initializing new in-memory state store" Feb 14 09:03:40.768065 kubelet[2510]: I0214 09:03:40.768051 2510 state_mem.go:75] "Updated machine memory state" Feb 14 09:03:40.772213 kubelet[2510]: I0214 09:03:40.772034 2510 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 09:03:40.772275 kubelet[2510]: I0214 09:03:40.772204 2510 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 09:03:40.772312 kubelet[2510]: I0214 09:03:40.772307 2510 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 09:03:40.819607 kubelet[2510]: I0214 09:03:40.819509 2510 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 14 09:03:40.824891 kubelet[2510]: I0214 09:03:40.824861 2510 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 14 09:03:40.824974 kubelet[2510]: I0214 09:03:40.824942 2510 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 14 09:03:40.839260 kubelet[2510]: I0214 09:03:40.839219 2510 topology_manager.go:215] "Topology Admit Handler" podUID="cd13ca45f6753a74635cd09ff64fb377" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 14 09:03:40.839365 kubelet[2510]: I0214 09:03:40.839325 2510 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 14 09:03:40.839394 kubelet[2510]: I0214 09:03:40.839364 2510 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 14 09:03:40.920268 kubelet[2510]: I0214 09:03:40.920225 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:40.920268 kubelet[2510]: I0214 09:03:40.920268 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:40.920479 kubelet[2510]: I0214 
09:03:40.920292 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:40.920479 kubelet[2510]: I0214 09:03:40.920310 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:40.920479 kubelet[2510]: I0214 09:03:40.920328 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:40.920479 kubelet[2510]: I0214 09:03:40.920357 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd13ca45f6753a74635cd09ff64fb377-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd13ca45f6753a74635cd09ff64fb377\") " pod="kube-system/kube-apiserver-localhost" Feb 14 09:03:40.920580 kubelet[2510]: I0214 09:03:40.920388 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:40.920580 kubelet[2510]: 
I0214 09:03:40.920545 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 14 09:03:40.920580 kubelet[2510]: I0214 09:03:40.920566 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 14 09:03:41.145182 kubelet[2510]: E0214 09:03:41.145043 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.145371 kubelet[2510]: E0214 09:03:41.145301 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.145801 kubelet[2510]: E0214 09:03:41.145745 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.711279 kubelet[2510]: I0214 09:03:41.711238 2510 apiserver.go:52] "Watching apiserver" Feb 14 09:03:41.719329 kubelet[2510]: I0214 09:03:41.719252 2510 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 09:03:41.753658 kubelet[2510]: E0214 09:03:41.753157 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.753658 kubelet[2510]: E0214 
09:03:41.753164 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.753860 kubelet[2510]: E0214 09:03:41.753832 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:41.786259 kubelet[2510]: I0214 09:03:41.785523 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7855064889999999 podStartE2EDuration="1.785506489s" podCreationTimestamp="2025-02-14 09:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 09:03:41.777512148 +0000 UTC m=+1.143156466" watchObservedRunningTime="2025-02-14 09:03:41.785506489 +0000 UTC m=+1.151150807" Feb 14 09:03:41.798234 kubelet[2510]: I0214 09:03:41.798178 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.798162059 podStartE2EDuration="1.798162059s" podCreationTimestamp="2025-02-14 09:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 09:03:41.785498249 +0000 UTC m=+1.151142567" watchObservedRunningTime="2025-02-14 09:03:41.798162059 +0000 UTC m=+1.163806377" Feb 14 09:03:41.810026 kubelet[2510]: I0214 09:03:41.809919 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.809904857 podStartE2EDuration="1.809904857s" podCreationTimestamp="2025-02-14 09:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 
09:03:41.799201079 +0000 UTC m=+1.164845397" watchObservedRunningTime="2025-02-14 09:03:41.809904857 +0000 UTC m=+1.175549175" Feb 14 09:03:42.033636 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 14 09:03:42.035304 sshd[1574]: pam_unix(sshd:session): session closed for user core Feb 14 09:03:42.040450 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:52216.service: Deactivated successfully. Feb 14 09:03:42.044313 systemd[1]: session-5.scope: Deactivated successfully. Feb 14 09:03:42.044457 systemd[1]: session-5.scope: Consumed 6.534s CPU time, 191.2M memory peak, 0B memory swap peak. Feb 14 09:03:42.045201 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Feb 14 09:03:42.046323 systemd-logind[1422]: Removed session 5. Feb 14 09:03:42.755122 kubelet[2510]: E0214 09:03:42.755093 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:44.005206 kubelet[2510]: E0214 09:03:44.005161 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:46.694290 kubelet[2510]: E0214 09:03:46.694249 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:46.760457 kubelet[2510]: E0214 09:03:46.760178 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:47.531917 kubelet[2510]: E0214 09:03:47.531826 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:47.761736 kubelet[2510]: E0214 
09:03:47.761703 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:48.763271 kubelet[2510]: E0214 09:03:48.763198 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:54.014486 kubelet[2510]: E0214 09:03:54.013213 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:54.079286 update_engine[1432]: I20250214 09:03:54.079211 1432 update_attempter.cc:509] Updating boot flags... Feb 14 09:03:54.097656 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2586) Feb 14 09:03:54.125797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2589) Feb 14 09:03:54.160627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2589) Feb 14 09:03:54.630631 kubelet[2510]: I0214 09:03:54.628674 2510 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 14 09:03:54.630768 containerd[1437]: time="2025-02-14T09:03:54.629054258Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 14 09:03:54.631034 kubelet[2510]: I0214 09:03:54.630754 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 14 09:03:55.630139 kubelet[2510]: I0214 09:03:55.630084 2510 topology_manager.go:215] "Topology Admit Handler" podUID="fd3e2a76-653c-4da5-b8a2-006f1d339122" podNamespace="kube-system" podName="kube-proxy-2bdtk" Feb 14 09:03:55.641313 kubelet[2510]: I0214 09:03:55.638497 2510 topology_manager.go:215] "Topology Admit Handler" podUID="888877fb-0457-4e08-b09b-5bffc65a97ca" podNamespace="kube-flannel" podName="kube-flannel-ds-qk2sh" Feb 14 09:03:55.642389 systemd[1]: Created slice kubepods-besteffort-podfd3e2a76_653c_4da5_b8a2_006f1d339122.slice - libcontainer container kubepods-besteffort-podfd3e2a76_653c_4da5_b8a2_006f1d339122.slice. Feb 14 09:03:55.655318 systemd[1]: Created slice kubepods-burstable-pod888877fb_0457_4e08_b09b_5bffc65a97ca.slice - libcontainer container kubepods-burstable-pod888877fb_0457_4e08_b09b_5bffc65a97ca.slice. 
Feb 14 09:03:55.721966 kubelet[2510]: I0214 09:03:55.721917 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/888877fb-0457-4e08-b09b-5bffc65a97ca-run\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.721966 kubelet[2510]: I0214 09:03:55.721963 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/888877fb-0457-4e08-b09b-5bffc65a97ca-flannel-cfg\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.722143 kubelet[2510]: I0214 09:03:55.721981 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/888877fb-0457-4e08-b09b-5bffc65a97ca-xtables-lock\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.722143 kubelet[2510]: I0214 09:03:55.721999 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znk5d\" (UniqueName: \"kubernetes.io/projected/888877fb-0457-4e08-b09b-5bffc65a97ca-kube-api-access-znk5d\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.722143 kubelet[2510]: I0214 09:03:55.722015 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/888877fb-0457-4e08-b09b-5bffc65a97ca-cni-plugin\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.722143 kubelet[2510]: I0214 
09:03:55.722031 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd3e2a76-653c-4da5-b8a2-006f1d339122-xtables-lock\") pod \"kube-proxy-2bdtk\" (UID: \"fd3e2a76-653c-4da5-b8a2-006f1d339122\") " pod="kube-system/kube-proxy-2bdtk" Feb 14 09:03:55.722143 kubelet[2510]: I0214 09:03:55.722053 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd3e2a76-653c-4da5-b8a2-006f1d339122-lib-modules\") pod \"kube-proxy-2bdtk\" (UID: \"fd3e2a76-653c-4da5-b8a2-006f1d339122\") " pod="kube-system/kube-proxy-2bdtk" Feb 14 09:03:55.722255 kubelet[2510]: I0214 09:03:55.722073 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd3e2a76-653c-4da5-b8a2-006f1d339122-kube-proxy\") pod \"kube-proxy-2bdtk\" (UID: \"fd3e2a76-653c-4da5-b8a2-006f1d339122\") " pod="kube-system/kube-proxy-2bdtk" Feb 14 09:03:55.722255 kubelet[2510]: I0214 09:03:55.722090 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98p9\" (UniqueName: \"kubernetes.io/projected/fd3e2a76-653c-4da5-b8a2-006f1d339122-kube-api-access-v98p9\") pod \"kube-proxy-2bdtk\" (UID: \"fd3e2a76-653c-4da5-b8a2-006f1d339122\") " pod="kube-system/kube-proxy-2bdtk" Feb 14 09:03:55.722255 kubelet[2510]: I0214 09:03:55.722104 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/888877fb-0457-4e08-b09b-5bffc65a97ca-cni\") pod \"kube-flannel-ds-qk2sh\" (UID: \"888877fb-0457-4e08-b09b-5bffc65a97ca\") " pod="kube-flannel/kube-flannel-ds-qk2sh" Feb 14 09:03:55.953099 kubelet[2510]: E0214 09:03:55.951811 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:55.953263 containerd[1437]: time="2025-02-14T09:03:55.952410866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bdtk,Uid:fd3e2a76-653c-4da5-b8a2-006f1d339122,Namespace:kube-system,Attempt:0,}" Feb 14 09:03:55.961433 kubelet[2510]: E0214 09:03:55.961403 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:55.961828 containerd[1437]: time="2025-02-14T09:03:55.961793042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qk2sh,Uid:888877fb-0457-4e08-b09b-5bffc65a97ca,Namespace:kube-flannel,Attempt:0,}" Feb 14 09:03:55.983250 containerd[1437]: time="2025-02-14T09:03:55.980389467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 09:03:55.983250 containerd[1437]: time="2025-02-14T09:03:55.980478429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 09:03:55.983250 containerd[1437]: time="2025-02-14T09:03:55.980494190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:55.983250 containerd[1437]: time="2025-02-14T09:03:55.980583712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:55.997150 containerd[1437]: time="2025-02-14T09:03:55.996876675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 09:03:55.997150 containerd[1437]: time="2025-02-14T09:03:55.996946237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 09:03:55.997150 containerd[1437]: time="2025-02-14T09:03:55.996979638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:55.997150 containerd[1437]: time="2025-02-14T09:03:55.997133162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 09:03:56.012782 systemd[1]: Started cri-containerd-57be8b1833169782a74bc9093f2f5e01e532b65dcb7bebe87619429a13acee3b.scope - libcontainer container 57be8b1833169782a74bc9093f2f5e01e532b65dcb7bebe87619429a13acee3b. Feb 14 09:03:56.019143 systemd[1]: Started cri-containerd-6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e.scope - libcontainer container 6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e. Feb 14 09:03:56.042364 containerd[1437]: time="2025-02-14T09:03:56.042300897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bdtk,Uid:fd3e2a76-653c-4da5-b8a2-006f1d339122,Namespace:kube-system,Attempt:0,} returns sandbox id \"57be8b1833169782a74bc9093f2f5e01e532b65dcb7bebe87619429a13acee3b\"" Feb 14 09:03:56.043429 kubelet[2510]: E0214 09:03:56.043167 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:56.048039 containerd[1437]: time="2025-02-14T09:03:56.047998684Z" level=info msg="CreateContainer within sandbox \"57be8b1833169782a74bc9093f2f5e01e532b65dcb7bebe87619429a13acee3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 09:03:56.058498 containerd[1437]: time="2025-02-14T09:03:56.058459075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qk2sh,Uid:888877fb-0457-4e08-b09b-5bffc65a97ca,Namespace:kube-flannel,Attempt:0,} returns sandbox id 
\"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\"" Feb 14 09:03:56.059424 kubelet[2510]: E0214 09:03:56.059394 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:56.062822 containerd[1437]: time="2025-02-14T09:03:56.062673344Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 14 09:03:56.065006 containerd[1437]: time="2025-02-14T09:03:56.064961844Z" level=info msg="CreateContainer within sandbox \"57be8b1833169782a74bc9093f2f5e01e532b65dcb7bebe87619429a13acee3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a528f925a9d2f2b935c9ceef709c9dfd19134960577d98418c7b93b1ff9979e7\"" Feb 14 09:03:56.067543 containerd[1437]: time="2025-02-14T09:03:56.065720623Z" level=info msg="StartContainer for \"a528f925a9d2f2b935c9ceef709c9dfd19134960577d98418c7b93b1ff9979e7\"" Feb 14 09:03:56.090777 systemd[1]: Started cri-containerd-a528f925a9d2f2b935c9ceef709c9dfd19134960577d98418c7b93b1ff9979e7.scope - libcontainer container a528f925a9d2f2b935c9ceef709c9dfd19134960577d98418c7b93b1ff9979e7. 
Feb 14 09:03:56.114136 containerd[1437]: time="2025-02-14T09:03:56.114014633Z" level=info msg="StartContainer for \"a528f925a9d2f2b935c9ceef709c9dfd19134960577d98418c7b93b1ff9979e7\" returns successfully" Feb 14 09:03:56.775230 kubelet[2510]: E0214 09:03:56.774759 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 09:03:56.786145 kubelet[2510]: I0214 09:03:56.786091 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2bdtk" podStartSLOduration=1.7860662280000001 podStartE2EDuration="1.786066228s" podCreationTimestamp="2025-02-14 09:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 09:03:56.783927813 +0000 UTC m=+16.149572131" watchObservedRunningTime="2025-02-14 09:03:56.786066228 +0000 UTC m=+16.151710546" Feb 14 09:03:57.288257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921898944.mount: Deactivated successfully. 
Feb 14 09:03:57.316640 containerd[1437]: time="2025-02-14T09:03:57.316577216Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 09:03:57.318076 containerd[1437]: time="2025-02-14T09:03:57.318048772Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 14 09:03:57.318815 containerd[1437]: time="2025-02-14T09:03:57.318786030Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 09:03:57.320904 containerd[1437]: time="2025-02-14T09:03:57.320869802Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 09:03:57.321688 containerd[1437]: time="2025-02-14T09:03:57.321659781Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.258938876s" Feb 14 09:03:57.321737 containerd[1437]: time="2025-02-14T09:03:57.321690902Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 14 09:03:57.323829 containerd[1437]: time="2025-02-14T09:03:57.323783074Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 14 09:03:57.337095 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2860133906.mount: Deactivated successfully. Feb 14 09:03:57.338647 containerd[1437]: time="2025-02-14T09:03:57.338586799Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb\"" Feb 14 09:03:57.339261 containerd[1437]: time="2025-02-14T09:03:57.339228295Z" level=info msg="StartContainer for \"6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb\"" Feb 14 09:03:57.374818 systemd[1]: Started cri-containerd-6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb.scope - libcontainer container 6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb. Feb 14 09:03:57.402826 systemd[1]: cri-containerd-6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb.scope: Deactivated successfully. Feb 14 09:03:57.405435 containerd[1437]: time="2025-02-14T09:03:57.405395127Z" level=info msg="StartContainer for \"6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb\" returns successfully" Feb 14 09:03:57.436838 containerd[1437]: time="2025-02-14T09:03:57.436781181Z" level=info msg="shim disconnected" id=6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb namespace=k8s.io Feb 14 09:03:57.436838 containerd[1437]: time="2025-02-14T09:03:57.436833542Z" level=warning msg="cleaning up after shim disconnected" id=6b934aad0dd01cbdcf42584a643c020c2e710ece5d90359d5cd576dad1d739eb namespace=k8s.io Feb 14 09:03:57.436838 containerd[1437]: time="2025-02-14T09:03:57.436842022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 09:03:57.778552 kubelet[2510]: E0214 09:03:57.778522 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 14 
09:03:57.779893 containerd[1437]: time="2025-02-14T09:03:57.779822842Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 14 09:03:59.024804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141027861.mount: Deactivated successfully.
Feb 14 09:04:00.548228 containerd[1437]: time="2025-02-14T09:04:00.548179659Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:04:00.548971 containerd[1437]: time="2025-02-14T09:04:00.548924355Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260"
Feb 14 09:04:00.549522 containerd[1437]: time="2025-02-14T09:04:00.549495367Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:04:00.553758 containerd[1437]: time="2025-02-14T09:04:00.553729138Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 14 09:04:00.555455 containerd[1437]: time="2025-02-14T09:04:00.555174129Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.775309725s"
Feb 14 09:04:00.555455 containerd[1437]: time="2025-02-14T09:04:00.555229090Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 14 09:04:00.561202 containerd[1437]: time="2025-02-14T09:04:00.561169297Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 14 09:04:00.571908 containerd[1437]: time="2025-02-14T09:04:00.571866887Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca\""
Feb 14 09:04:00.572321 containerd[1437]: time="2025-02-14T09:04:00.572205374Z" level=info msg="StartContainer for \"300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca\""
Feb 14 09:04:00.595751 systemd[1]: Started cri-containerd-300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca.scope - libcontainer container 300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca.
Feb 14 09:04:00.619059 containerd[1437]: time="2025-02-14T09:04:00.617333502Z" level=info msg="StartContainer for \"300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca\" returns successfully"
Feb 14 09:04:00.619999 systemd[1]: cri-containerd-300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca.scope: Deactivated successfully.
Feb 14 09:04:00.656664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca-rootfs.mount: Deactivated successfully.
Feb 14 09:04:00.666184 kubelet[2510]: I0214 09:04:00.666129 2510 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 14 09:04:00.695053 kubelet[2510]: I0214 09:04:00.694920 2510 topology_manager.go:215] "Topology Admit Handler" podUID="c7920f60-c74d-4d12-9072-391d1d65e582" podNamespace="kube-system" podName="coredns-7db6d8ff4d-46t8f"
Feb 14 09:04:00.695315 kubelet[2510]: I0214 09:04:00.695279 2510 topology_manager.go:215] "Topology Admit Handler" podUID="8282477f-b3f4-48bc-bf85-ae1dbd480c66" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g4nwv"
Feb 14 09:04:00.708314 systemd[1]: Created slice kubepods-burstable-podc7920f60_c74d_4d12_9072_391d1d65e582.slice - libcontainer container kubepods-burstable-podc7920f60_c74d_4d12_9072_391d1d65e582.slice.
Feb 14 09:04:00.717689 systemd[1]: Created slice kubepods-burstable-pod8282477f_b3f4_48bc_bf85_ae1dbd480c66.slice - libcontainer container kubepods-burstable-pod8282477f_b3f4_48bc_bf85_ae1dbd480c66.slice.
Feb 14 09:04:00.753996 kubelet[2510]: I0214 09:04:00.753896 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsswm\" (UniqueName: \"kubernetes.io/projected/c7920f60-c74d-4d12-9072-391d1d65e582-kube-api-access-rsswm\") pod \"coredns-7db6d8ff4d-46t8f\" (UID: \"c7920f60-c74d-4d12-9072-391d1d65e582\") " pod="kube-system/coredns-7db6d8ff4d-46t8f"
Feb 14 09:04:00.753996 kubelet[2510]: I0214 09:04:00.753950 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvpw\" (UniqueName: \"kubernetes.io/projected/8282477f-b3f4-48bc-bf85-ae1dbd480c66-kube-api-access-qbvpw\") pod \"coredns-7db6d8ff4d-g4nwv\" (UID: \"8282477f-b3f4-48bc-bf85-ae1dbd480c66\") " pod="kube-system/coredns-7db6d8ff4d-g4nwv"
Feb 14 09:04:00.753996 kubelet[2510]: I0214 09:04:00.753971 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8282477f-b3f4-48bc-bf85-ae1dbd480c66-config-volume\") pod \"coredns-7db6d8ff4d-g4nwv\" (UID: \"8282477f-b3f4-48bc-bf85-ae1dbd480c66\") " pod="kube-system/coredns-7db6d8ff4d-g4nwv"
Feb 14 09:04:00.753996 kubelet[2510]: I0214 09:04:00.753996 2510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7920f60-c74d-4d12-9072-391d1d65e582-config-volume\") pod \"coredns-7db6d8ff4d-46t8f\" (UID: \"c7920f60-c74d-4d12-9072-391d1d65e582\") " pod="kube-system/coredns-7db6d8ff4d-46t8f"
Feb 14 09:04:00.760648 containerd[1437]: time="2025-02-14T09:04:00.760433172Z" level=info msg="shim disconnected" id=300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca namespace=k8s.io
Feb 14 09:04:00.760648 containerd[1437]: time="2025-02-14T09:04:00.760482933Z" level=warning msg="cleaning up after shim disconnected" id=300164aa8e48750b1fe7da0eefb407088e29e0e808ee8f52c59fb3171b9a84ca namespace=k8s.io
Feb 14 09:04:00.760648 containerd[1437]: time="2025-02-14T09:04:00.760492813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 14 09:04:00.786498 kubelet[2510]: E0214 09:04:00.785664 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:00.788493 containerd[1437]: time="2025-02-14T09:04:00.788396812Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 14 09:04:00.808100 containerd[1437]: time="2025-02-14T09:04:00.808002112Z" level=info msg="CreateContainer within sandbox \"6e0662b3100d7ca408112c879d4691f2d46614eaa89ebed97c00a7d1c193aa5e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7fd7051cbca2413aa18921642245fbdc3b5574daec95ec1c9aea00be8c1269cd\""
Feb 14 09:04:00.809243 containerd[1437]: time="2025-02-14T09:04:00.808997654Z" level=info msg="StartContainer for \"7fd7051cbca2413aa18921642245fbdc3b5574daec95ec1c9aea00be8c1269cd\""
Feb 14 09:04:00.833742 systemd[1]: Started cri-containerd-7fd7051cbca2413aa18921642245fbdc3b5574daec95ec1c9aea00be8c1269cd.scope - libcontainer container 7fd7051cbca2413aa18921642245fbdc3b5574daec95ec1c9aea00be8c1269cd.
Feb 14 09:04:00.856925 containerd[1437]: time="2025-02-14T09:04:00.856793959Z" level=info msg="StartContainer for \"7fd7051cbca2413aa18921642245fbdc3b5574daec95ec1c9aea00be8c1269cd\" returns successfully"
Feb 14 09:04:01.013621 kubelet[2510]: E0214 09:04:01.013570 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:01.014624 containerd[1437]: time="2025-02-14T09:04:01.014207486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46t8f,Uid:c7920f60-c74d-4d12-9072-391d1d65e582,Namespace:kube-system,Attempt:0,}"
Feb 14 09:04:01.020391 kubelet[2510]: E0214 09:04:01.020268 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:01.020707 containerd[1437]: time="2025-02-14T09:04:01.020668299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g4nwv,Uid:8282477f-b3f4-48bc-bf85-ae1dbd480c66,Namespace:kube-system,Attempt:0,}"
Feb 14 09:04:01.068961 containerd[1437]: time="2025-02-14T09:04:01.068829727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46t8f,Uid:c7920f60-c74d-4d12-9072-391d1d65e582,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e54091bd824183ee10187ce824dd7a281a6a994ab06a341e32fb0b92d683ec4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 14 09:04:01.069072 kubelet[2510]: E0214 09:04:01.069025 2510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54091bd824183ee10187ce824dd7a281a6a994ab06a341e32fb0b92d683ec4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 14 09:04:01.069116 kubelet[2510]: E0214 09:04:01.069097 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54091bd824183ee10187ce824dd7a281a6a994ab06a341e32fb0b92d683ec4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-46t8f"
Feb 14 09:04:01.069145 kubelet[2510]: E0214 09:04:01.069115 2510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e54091bd824183ee10187ce824dd7a281a6a994ab06a341e32fb0b92d683ec4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-46t8f"
Feb 14 09:04:01.069174 kubelet[2510]: E0214 09:04:01.069148 2510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-46t8f_kube-system(c7920f60-c74d-4d12-9072-391d1d65e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-46t8f_kube-system(c7920f60-c74d-4d12-9072-391d1d65e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e54091bd824183ee10187ce824dd7a281a6a994ab06a341e32fb0b92d683ec4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-46t8f" podUID="c7920f60-c74d-4d12-9072-391d1d65e582"
Feb 14 09:04:01.076112 containerd[1437]: time="2025-02-14T09:04:01.076075155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g4nwv,Uid:8282477f-b3f4-48bc-bf85-ae1dbd480c66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d970315871b6e04c9a06d1afc9d47715857e267c7708854c9c3844e9ecea1500\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 14 09:04:01.076271 kubelet[2510]: E0214 09:04:01.076242 2510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d970315871b6e04c9a06d1afc9d47715857e267c7708854c9c3844e9ecea1500\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 14 09:04:01.076313 kubelet[2510]: E0214 09:04:01.076285 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d970315871b6e04c9a06d1afc9d47715857e267c7708854c9c3844e9ecea1500\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-g4nwv"
Feb 14 09:04:01.076313 kubelet[2510]: E0214 09:04:01.076301 2510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d970315871b6e04c9a06d1afc9d47715857e267c7708854c9c3844e9ecea1500\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-g4nwv"
Feb 14 09:04:01.076367 kubelet[2510]: E0214 09:04:01.076333 2510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-g4nwv_kube-system(8282477f-b3f4-48bc-bf85-ae1dbd480c66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-g4nwv_kube-system(8282477f-b3f4-48bc-bf85-ae1dbd480c66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d970315871b6e04c9a06d1afc9d47715857e267c7708854c9c3844e9ecea1500\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-g4nwv" podUID="8282477f-b3f4-48bc-bf85-ae1dbd480c66"
Feb 14 09:04:01.788809 kubelet[2510]: E0214 09:04:01.788778 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:01.799462 kubelet[2510]: I0214 09:04:01.799394 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-qk2sh" podStartSLOduration=2.304480951 podStartE2EDuration="6.799378152s" podCreationTimestamp="2025-02-14 09:03:55 +0000 UTC" firstStartedPulling="2025-02-14 09:03:56.061536515 +0000 UTC m=+15.427180793" lastFinishedPulling="2025-02-14 09:04:00.556433676 +0000 UTC m=+19.922077994" observedRunningTime="2025-02-14 09:04:01.798543895 +0000 UTC m=+21.164188173" watchObservedRunningTime="2025-02-14 09:04:01.799378152 +0000 UTC m=+21.165022470"
Feb 14 09:04:01.939775 systemd-networkd[1382]: flannel.1: Link UP
Feb 14 09:04:01.939782 systemd-networkd[1382]: flannel.1: Gained carrier
Feb 14 09:04:02.790252 kubelet[2510]: E0214 09:04:02.789907 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:03.397756 systemd-networkd[1382]: flannel.1: Gained IPv6LL
Feb 14 09:04:06.933035 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:55818.service - OpenSSH per-connection server daemon (10.0.0.1:55818).
Feb 14 09:04:06.969683 sshd[3165]: Accepted publickey for core from 10.0.0.1 port 55818 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:06.971160 sshd[3165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:06.975193 systemd-logind[1422]: New session 6 of user core.
Feb 14 09:04:06.980719 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 14 09:04:07.094067 sshd[3165]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:07.097578 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:55818.service: Deactivated successfully.
Feb 14 09:04:07.099237 systemd[1]: session-6.scope: Deactivated successfully.
Feb 14 09:04:07.100002 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Feb 14 09:04:07.101044 systemd-logind[1422]: Removed session 6.
Feb 14 09:04:12.108183 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826).
Feb 14 09:04:12.143134 sshd[3222]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:12.144353 sshd[3222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:12.147861 systemd-logind[1422]: New session 7 of user core.
Feb 14 09:04:12.158760 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 14 09:04:12.265261 sshd[3222]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:12.268713 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:55826.service: Deactivated successfully.
Feb 14 09:04:12.272101 systemd[1]: session-7.scope: Deactivated successfully.
Feb 14 09:04:12.272757 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Feb 14 09:04:12.273637 systemd-logind[1422]: Removed session 7.
Feb 14 09:04:12.740463 kubelet[2510]: E0214 09:04:12.739169 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:12.741032 containerd[1437]: time="2025-02-14T09:04:12.740999185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46t8f,Uid:c7920f60-c74d-4d12-9072-391d1d65e582,Namespace:kube-system,Attempt:0,}"
Feb 14 09:04:12.767368 systemd-networkd[1382]: cni0: Link UP
Feb 14 09:04:12.767375 systemd-networkd[1382]: cni0: Gained carrier
Feb 14 09:04:12.769356 systemd-networkd[1382]: cni0: Lost carrier
Feb 14 09:04:12.772649 systemd-networkd[1382]: veth17afdc0d: Link UP
Feb 14 09:04:12.775927 kernel: cni0: port 1(veth17afdc0d) entered blocking state
Feb 14 09:04:12.776059 kernel: cni0: port 1(veth17afdc0d) entered disabled state
Feb 14 09:04:12.776076 kernel: veth17afdc0d: entered allmulticast mode
Feb 14 09:04:12.778648 kernel: veth17afdc0d: entered promiscuous mode
Feb 14 09:04:12.778721 kernel: cni0: port 1(veth17afdc0d) entered blocking state
Feb 14 09:04:12.780421 kernel: cni0: port 1(veth17afdc0d) entered forwarding state
Feb 14 09:04:12.782670 kernel: cni0: port 1(veth17afdc0d) entered disabled state
Feb 14 09:04:12.793144 kernel: cni0: port 1(veth17afdc0d) entered blocking state
Feb 14 09:04:12.793255 kernel: cni0: port 1(veth17afdc0d) entered forwarding state
Feb 14 09:04:12.793399 systemd-networkd[1382]: veth17afdc0d: Gained carrier
Feb 14 09:04:12.793677 systemd-networkd[1382]: cni0: Gained carrier
Feb 14 09:04:12.795283 containerd[1437]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Feb 14 09:04:12.795283 containerd[1437]: delegateAdd: netconf sent to delegate plugin:
Feb 14 09:04:12.813562 containerd[1437]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-14T09:04:12.813423272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 09:04:12.813562 containerd[1437]: time="2025-02-14T09:04:12.813487352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 09:04:12.814023 containerd[1437]: time="2025-02-14T09:04:12.813529993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 09:04:12.814201 containerd[1437]: time="2025-02-14T09:04:12.814162201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 09:04:12.834809 systemd[1]: Started cri-containerd-f16b7f29bec1ec8bd9c679d2c187792f1268ee1f44be5e35f37df5446ac635eb.scope - libcontainer container f16b7f29bec1ec8bd9c679d2c187792f1268ee1f44be5e35f37df5446ac635eb.
Feb 14 09:04:12.846636 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 14 09:04:12.863493 containerd[1437]: time="2025-02-14T09:04:12.863442259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-46t8f,Uid:c7920f60-c74d-4d12-9072-391d1d65e582,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16b7f29bec1ec8bd9c679d2c187792f1268ee1f44be5e35f37df5446ac635eb\""
Feb 14 09:04:12.864372 kubelet[2510]: E0214 09:04:12.864316 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:12.867317 containerd[1437]: time="2025-02-14T09:04:12.867262750Z" level=info msg="CreateContainer within sandbox \"f16b7f29bec1ec8bd9c679d2c187792f1268ee1f44be5e35f37df5446ac635eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 14 09:04:12.885705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277228608.mount: Deactivated successfully.
Feb 14 09:04:12.888501 containerd[1437]: time="2025-02-14T09:04:12.888451913Z" level=info msg="CreateContainer within sandbox \"f16b7f29bec1ec8bd9c679d2c187792f1268ee1f44be5e35f37df5446ac635eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d065710a050272c912431f982fef8173a23ea307b386ba7b64a736370f755fe1\""
Feb 14 09:04:12.890510 containerd[1437]: time="2025-02-14T09:04:12.890469340Z" level=info msg="StartContainer for \"d065710a050272c912431f982fef8173a23ea307b386ba7b64a736370f755fe1\""
Feb 14 09:04:12.917811 systemd[1]: Started cri-containerd-d065710a050272c912431f982fef8173a23ea307b386ba7b64a736370f755fe1.scope - libcontainer container d065710a050272c912431f982fef8173a23ea307b386ba7b64a736370f755fe1.
Feb 14 09:04:12.946055 containerd[1437]: time="2025-02-14T09:04:12.946005481Z" level=info msg="StartContainer for \"d065710a050272c912431f982fef8173a23ea307b386ba7b64a736370f755fe1\" returns successfully"
Feb 14 09:04:13.811981 kubelet[2510]: E0214 09:04:13.811913 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:13.822421 kubelet[2510]: I0214 09:04:13.822352 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-46t8f" podStartSLOduration=18.822303658 podStartE2EDuration="18.822303658s" podCreationTimestamp="2025-02-14 09:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 09:04:13.820389153 +0000 UTC m=+33.186033471" watchObservedRunningTime="2025-02-14 09:04:13.822303658 +0000 UTC m=+33.187948016"
Feb 14 09:04:13.893717 systemd-networkd[1382]: veth17afdc0d: Gained IPv6LL
Feb 14 09:04:14.149757 systemd-networkd[1382]: cni0: Gained IPv6LL
Feb 14 09:04:14.812741 kubelet[2510]: E0214 09:04:14.812707 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:15.739668 kubelet[2510]: E0214 09:04:15.739555 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:15.740003 containerd[1437]: time="2025-02-14T09:04:15.739940789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g4nwv,Uid:8282477f-b3f4-48bc-bf85-ae1dbd480c66,Namespace:kube-system,Attempt:0,}"
Feb 14 09:04:15.759583 systemd-networkd[1382]: vethaf7e7ee8: Link UP
Feb 14 09:04:15.761586 kernel: cni0: port 2(vethaf7e7ee8) entered blocking state
Feb 14 09:04:15.761671 kernel: cni0: port 2(vethaf7e7ee8) entered disabled state
Feb 14 09:04:15.761689 kernel: vethaf7e7ee8: entered allmulticast mode
Feb 14 09:04:15.761704 kernel: vethaf7e7ee8: entered promiscuous mode
Feb 14 09:04:15.765313 systemd-networkd[1382]: vethaf7e7ee8: Gained carrier
Feb 14 09:04:15.766842 kernel: cni0: port 2(vethaf7e7ee8) entered blocking state
Feb 14 09:04:15.766907 kernel: cni0: port 2(vethaf7e7ee8) entered forwarding state
Feb 14 09:04:15.767761 containerd[1437]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Feb 14 09:04:15.767761 containerd[1437]: delegateAdd: netconf sent to delegate plugin:
Feb 14 09:04:15.786654 containerd[1437]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-14T09:04:15.786224990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 14 09:04:15.786654 containerd[1437]: time="2025-02-14T09:04:15.786502153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 14 09:04:15.786654 containerd[1437]: time="2025-02-14T09:04:15.786528113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 09:04:15.787559 containerd[1437]: time="2025-02-14T09:04:15.787496445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 14 09:04:15.808764 systemd[1]: Started cri-containerd-bfd4bacd7202e947c236861445cc8d5b3ed20c6a492a8fe1c0393db373e2be3b.scope - libcontainer container bfd4bacd7202e947c236861445cc8d5b3ed20c6a492a8fe1c0393db373e2be3b.
Feb 14 09:04:15.818298 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 14 09:04:15.833353 containerd[1437]: time="2025-02-14T09:04:15.833308840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g4nwv,Uid:8282477f-b3f4-48bc-bf85-ae1dbd480c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd4bacd7202e947c236861445cc8d5b3ed20c6a492a8fe1c0393db373e2be3b\""
Feb 14 09:04:15.834698 kubelet[2510]: E0214 09:04:15.834675 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:15.837334 containerd[1437]: time="2025-02-14T09:04:15.837300089Z" level=info msg="CreateContainer within sandbox \"bfd4bacd7202e947c236861445cc8d5b3ed20c6a492a8fe1c0393db373e2be3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 14 09:04:15.848735 containerd[1437]: time="2025-02-14T09:04:15.848695387Z" level=info msg="CreateContainer within sandbox \"bfd4bacd7202e947c236861445cc8d5b3ed20c6a492a8fe1c0393db373e2be3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01a91240332832db78c272b7131eb53a7825678ad3fead826acd1453eaad9c68\""
Feb 14 09:04:15.850425 containerd[1437]: time="2025-02-14T09:04:15.850390608Z" level=info msg="StartContainer for \"01a91240332832db78c272b7131eb53a7825678ad3fead826acd1453eaad9c68\""
Feb 14 09:04:15.877778 systemd[1]: Started cri-containerd-01a91240332832db78c272b7131eb53a7825678ad3fead826acd1453eaad9c68.scope - libcontainer container 01a91240332832db78c272b7131eb53a7825678ad3fead826acd1453eaad9c68.
Feb 14 09:04:15.902711 containerd[1437]: time="2025-02-14T09:04:15.902660761Z" level=info msg="StartContainer for \"01a91240332832db78c272b7131eb53a7825678ad3fead826acd1453eaad9c68\" returns successfully"
Feb 14 09:04:16.759223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274675985.mount: Deactivated successfully.
Feb 14 09:04:16.817691 kubelet[2510]: E0214 09:04:16.817215 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:16.825631 kubelet[2510]: I0214 09:04:16.825550 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g4nwv" podStartSLOduration=21.825531855 podStartE2EDuration="21.825531855s" podCreationTimestamp="2025-02-14 09:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 09:04:16.824480002 +0000 UTC m=+36.190124320" watchObservedRunningTime="2025-02-14 09:04:16.825531855 +0000 UTC m=+36.191176173"
Feb 14 09:04:17.279204 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:48864.service - OpenSSH per-connection server daemon (10.0.0.1:48864).
Feb 14 09:04:17.323897 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 48864 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:17.325435 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:17.329918 systemd-logind[1422]: New session 8 of user core.
Feb 14 09:04:17.339777 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 14 09:04:17.447830 sshd[3494]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:17.455215 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:48864.service: Deactivated successfully.
Feb 14 09:04:17.458027 systemd[1]: session-8.scope: Deactivated successfully.
Feb 14 09:04:17.459642 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Feb 14 09:04:17.465861 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:48872.service - OpenSSH per-connection server daemon (10.0.0.1:48872).
Feb 14 09:04:17.467155 systemd-logind[1422]: Removed session 8.
Feb 14 09:04:17.479121 systemd-networkd[1382]: vethaf7e7ee8: Gained IPv6LL
Feb 14 09:04:17.497933 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 48872 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:17.499406 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:17.503661 systemd-logind[1422]: New session 9 of user core.
Feb 14 09:04:17.509765 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 14 09:04:17.646121 sshd[3510]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:17.658212 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:48872.service: Deactivated successfully.
Feb 14 09:04:17.661791 systemd[1]: session-9.scope: Deactivated successfully.
Feb 14 09:04:17.664651 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Feb 14 09:04:17.672102 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:48882.service - OpenSSH per-connection server daemon (10.0.0.1:48882).
Feb 14 09:04:17.675343 systemd-logind[1422]: Removed session 9.
Feb 14 09:04:17.707416 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 48882 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:17.708812 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:17.712662 systemd-logind[1422]: New session 10 of user core.
Feb 14 09:04:17.728780 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 14 09:04:17.822050 kubelet[2510]: E0214 09:04:17.822006 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 14 09:04:17.843283 sshd[3522]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:17.846916 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:48882.service: Deactivated successfully.
Feb 14 09:04:17.848879 systemd[1]: session-10.scope: Deactivated successfully.
Feb 14 09:04:17.849676 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Feb 14 09:04:17.850437 systemd-logind[1422]: Removed session 10.
Feb 14 09:04:22.876910 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:56192.service - OpenSSH per-connection server daemon (10.0.0.1:56192).
Feb 14 09:04:22.909914 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 56192 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0
Feb 14 09:04:22.911525 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 14 09:04:22.914934 systemd-logind[1422]: New session 11 of user core.
Feb 14 09:04:22.931842 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 14 09:04:23.043361 sshd[3559]: pam_unix(sshd:session): session closed for user core
Feb 14 09:04:23.053158 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:56192.service: Deactivated successfully.
Feb 14 09:04:23.054694 systemd[1]: session-11.scope: Deactivated successfully.
Feb 14 09:04:23.056050 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
Feb 14 09:04:23.068900 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198).
Feb 14 09:04:23.069809 systemd-logind[1422]: Removed session 11.
Feb 14 09:04:23.100772 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:23.101963 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:23.105721 systemd-logind[1422]: New session 12 of user core. Feb 14 09:04:23.123792 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 09:04:23.297151 sshd[3573]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:23.306152 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:56198.service: Deactivated successfully. Feb 14 09:04:23.309658 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 09:04:23.310913 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Feb 14 09:04:23.320890 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:56210.service - OpenSSH per-connection server daemon (10.0.0.1:56210). Feb 14 09:04:23.321829 systemd-logind[1422]: Removed session 12. Feb 14 09:04:23.354323 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 56210 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:23.355737 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:23.360655 systemd-logind[1422]: New session 13 of user core. Feb 14 09:04:23.366753 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 09:04:24.599856 sshd[3585]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:24.607460 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:56210.service: Deactivated successfully. Feb 14 09:04:24.610224 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 09:04:24.613721 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Feb 14 09:04:24.622943 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:56222.service - OpenSSH per-connection server daemon (10.0.0.1:56222). 
Feb 14 09:04:24.624117 systemd-logind[1422]: Removed session 13. Feb 14 09:04:24.655314 sshd[3607]: Accepted publickey for core from 10.0.0.1 port 56222 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:24.656690 sshd[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:24.660360 systemd-logind[1422]: New session 14 of user core. Feb 14 09:04:24.670769 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 14 09:04:24.875508 sshd[3607]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:24.888271 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:56222.service: Deactivated successfully. Feb 14 09:04:24.889960 systemd[1]: session-14.scope: Deactivated successfully. Feb 14 09:04:24.891811 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Feb 14 09:04:24.904964 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236). Feb 14 09:04:24.905943 systemd-logind[1422]: Removed session 14. Feb 14 09:04:24.936463 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:24.937929 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:24.941578 systemd-logind[1422]: New session 15 of user core. Feb 14 09:04:24.951819 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 14 09:04:25.057671 sshd[3619]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:25.061083 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:56236.service: Deactivated successfully. Feb 14 09:04:25.062850 systemd[1]: session-15.scope: Deactivated successfully. Feb 14 09:04:25.063471 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Feb 14 09:04:25.064271 systemd-logind[1422]: Removed session 15. 
Feb 14 09:04:30.068055 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:56238.service - OpenSSH per-connection server daemon (10.0.0.1:56238). Feb 14 09:04:30.103411 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 56238 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:30.104695 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:30.107975 systemd-logind[1422]: New session 16 of user core. Feb 14 09:04:30.121726 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 14 09:04:30.222409 sshd[3660]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:30.225342 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:56238.service: Deactivated successfully. Feb 14 09:04:30.228956 systemd[1]: session-16.scope: Deactivated successfully. Feb 14 09:04:30.229531 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Feb 14 09:04:30.230497 systemd-logind[1422]: Removed session 16. Feb 14 09:04:35.233243 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:50014.service - OpenSSH per-connection server daemon (10.0.0.1:50014). Feb 14 09:04:35.268590 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 50014 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:35.269811 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:35.273354 systemd-logind[1422]: New session 17 of user core. Feb 14 09:04:35.281728 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 14 09:04:35.387283 sshd[3696]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:35.390618 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:50014.service: Deactivated successfully. Feb 14 09:04:35.392293 systemd[1]: session-17.scope: Deactivated successfully. Feb 14 09:04:35.393215 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. 
Feb 14 09:04:35.394129 systemd-logind[1422]: Removed session 17. Feb 14 09:04:40.401980 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:50016.service - OpenSSH per-connection server daemon (10.0.0.1:50016). Feb 14 09:04:40.437497 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 50016 ssh2: RSA SHA256:nkzhV86wH9QcDRurhp7rPRyA4ZaXT3UfdFDNqPx4HW0 Feb 14 09:04:40.438711 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 09:04:40.443269 systemd-logind[1422]: New session 18 of user core. Feb 14 09:04:40.452733 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 14 09:04:40.555653 sshd[3733]: pam_unix(sshd:session): session closed for user core Feb 14 09:04:40.559299 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:50016.service: Deactivated successfully. Feb 14 09:04:40.561028 systemd[1]: session-18.scope: Deactivated successfully. Feb 14 09:04:40.561636 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Feb 14 09:04:40.562475 systemd-logind[1422]: Removed session 18.