Feb 13 18:49:43.881971 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 18:49:43.881993 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025 Feb 13 18:49:43.882004 kernel: KASLR enabled Feb 13 18:49:43.882010 kernel: efi: EFI v2.7 by EDK II Feb 13 18:49:43.882015 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 18:49:43.882021 kernel: random: crng init done Feb 13 18:49:43.882027 kernel: secureboot: Secure boot disabled Feb 13 18:49:43.882033 kernel: ACPI: Early table checksum verification disabled Feb 13 18:49:43.882039 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 18:49:43.882047 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 18:49:43.882053 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882058 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882064 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882070 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882077 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882084 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882091 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882097 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882103 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 18:49:43.882109 kernel: ACPI: SPCR: console: 
pl011,mmio,0x9000000,9600 Feb 13 18:49:43.882115 kernel: NUMA: Failed to initialise from firmware Feb 13 18:49:43.882121 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 18:49:43.882127 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Feb 13 18:49:43.882133 kernel: Zone ranges: Feb 13 18:49:43.882139 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 18:49:43.882146 kernel: DMA32 empty Feb 13 18:49:43.882152 kernel: Normal empty Feb 13 18:49:43.882158 kernel: Movable zone start for each node Feb 13 18:49:43.882164 kernel: Early memory node ranges Feb 13 18:49:43.882171 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 18:49:43.882177 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 18:49:43.882183 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 18:49:43.882189 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 18:49:43.882195 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 18:49:43.882201 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 18:49:43.882207 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 18:49:43.882213 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 18:49:43.882220 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 18:49:43.882226 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 18:49:43.882233 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 18:49:43.882241 kernel: psci: probing for conduit method from ACPI. Feb 13 18:49:43.882248 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 18:49:43.882254 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 18:49:43.882262 kernel: psci: Trusted OS migration not required Feb 13 18:49:43.882276 kernel: psci: SMC Calling Convention v1.1 Feb 13 18:49:43.882283 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 18:49:43.882290 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 18:49:43.882297 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 18:49:43.882303 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 18:49:43.882310 kernel: Detected PIPT I-cache on CPU0 Feb 13 18:49:43.882316 kernel: CPU features: detected: GIC system register CPU interface Feb 13 18:49:43.882323 kernel: CPU features: detected: Hardware dirty bit management Feb 13 18:49:43.882329 kernel: CPU features: detected: Spectre-v4 Feb 13 18:49:43.882337 kernel: CPU features: detected: Spectre-BHB Feb 13 18:49:43.882344 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 18:49:43.882350 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 18:49:43.882357 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 18:49:43.882363 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 18:49:43.882369 kernel: alternatives: applying boot alternatives Feb 13 18:49:43.882377 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:49:43.882384 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 18:49:43.882390 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 18:49:43.882396 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 18:49:43.882403 kernel: Fallback order for Node 0: 0 Feb 13 18:49:43.882411 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 18:49:43.882417 kernel: Policy zone: DMA Feb 13 18:49:43.882423 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 18:49:43.882430 kernel: software IO TLB: area num 4. Feb 13 18:49:43.882436 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 18:49:43.882443 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved) Feb 13 18:49:43.882449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 18:49:43.882456 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 18:49:43.882463 kernel: rcu: RCU event tracing is enabled. Feb 13 18:49:43.882470 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 18:49:43.882476 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 18:49:43.882483 kernel: Tracing variant of Tasks RCU enabled. Feb 13 18:49:43.882491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 18:49:43.882498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 18:49:43.882504 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 18:49:43.882511 kernel: GICv3: 256 SPIs implemented Feb 13 18:49:43.882517 kernel: GICv3: 0 Extended SPIs implemented Feb 13 18:49:43.882523 kernel: Root IRQ handler: gic_handle_irq Feb 13 18:49:43.882529 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 18:49:43.882536 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 18:49:43.882542 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 18:49:43.882549 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 18:49:43.882555 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 18:49:43.882563 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 18:49:43.882570 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 18:49:43.882576 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 18:49:43.882583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:49:43.882589 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 18:49:43.882596 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 18:49:43.882602 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 18:49:43.882609 kernel: arm-pv: using stolen time PV Feb 13 18:49:43.882615 kernel: Console: colour dummy device 80x25 Feb 13 18:49:43.882622 kernel: ACPI: Core revision 20230628 Feb 13 18:49:43.882648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Feb 13 18:49:43.882657 kernel: pid_max: default: 32768 minimum: 301 Feb 13 18:49:43.882663 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 18:49:43.882670 kernel: landlock: Up and running. Feb 13 18:49:43.882676 kernel: SELinux: Initializing. Feb 13 18:49:43.882683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:49:43.882690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:49:43.882697 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 18:49:43.882704 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 18:49:43.882710 kernel: rcu: Hierarchical SRCU implementation. Feb 13 18:49:43.882718 kernel: rcu: Max phase no-delay instances is 400. Feb 13 18:49:43.882725 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 18:49:43.882731 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 18:49:43.882738 kernel: Remapping and enabling EFI services. Feb 13 18:49:43.882744 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 18:49:43.882751 kernel: Detected PIPT I-cache on CPU1 Feb 13 18:49:43.882758 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 18:49:43.882764 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 18:49:43.882771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:49:43.882779 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 18:49:43.882786 kernel: Detected PIPT I-cache on CPU2 Feb 13 18:49:43.882797 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 18:49:43.882806 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 18:49:43.882813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:49:43.882820 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 18:49:43.882827 kernel: Detected PIPT I-cache on CPU3 Feb 13 18:49:43.882834 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 18:49:43.882841 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 18:49:43.882849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 18:49:43.882856 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 18:49:43.882863 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 18:49:43.882870 kernel: SMP: Total of 4 processors activated. 
Feb 13 18:49:43.882877 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 18:49:43.882883 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 18:49:43.882890 kernel: CPU features: detected: Common not Private translations Feb 13 18:49:43.882897 kernel: CPU features: detected: CRC32 instructions Feb 13 18:49:43.882906 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 18:49:43.882913 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 18:49:43.882920 kernel: CPU features: detected: LSE atomic instructions Feb 13 18:49:43.882927 kernel: CPU features: detected: Privileged Access Never Feb 13 18:49:43.882934 kernel: CPU features: detected: RAS Extension Support Feb 13 18:49:43.882941 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 18:49:43.882947 kernel: CPU: All CPU(s) started at EL1 Feb 13 18:49:43.882954 kernel: alternatives: applying system-wide alternatives Feb 13 18:49:43.882961 kernel: devtmpfs: initialized Feb 13 18:49:43.882968 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 18:49:43.882977 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 18:49:43.882984 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 18:49:43.882991 kernel: SMBIOS 3.0.0 present. 
Feb 13 18:49:43.882998 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 18:49:43.883005 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 18:49:43.883012 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 18:49:43.883019 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 18:49:43.883026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 18:49:43.883033 kernel: audit: initializing netlink subsys (disabled) Feb 13 18:49:43.883042 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Feb 13 18:49:43.883049 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 18:49:43.883056 kernel: cpuidle: using governor menu Feb 13 18:49:43.883062 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 18:49:43.883069 kernel: ASID allocator initialised with 32768 entries Feb 13 18:49:43.883076 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 18:49:43.883083 kernel: Serial: AMBA PL011 UART driver Feb 13 18:49:43.883090 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 18:49:43.883097 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 18:49:43.883106 kernel: Modules: 508880 pages in range for PLT usage Feb 13 18:49:43.883113 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 18:49:43.883120 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 18:49:43.883127 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 18:49:43.883134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 18:49:43.883141 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 18:49:43.883148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 18:49:43.883155 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 
pages Feb 13 18:49:43.883162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 18:49:43.883171 kernel: ACPI: Added _OSI(Module Device) Feb 13 18:49:43.883177 kernel: ACPI: Added _OSI(Processor Device) Feb 13 18:49:43.883185 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 18:49:43.883192 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 18:49:43.883198 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 18:49:43.883205 kernel: ACPI: Interpreter enabled Feb 13 18:49:43.883212 kernel: ACPI: Using GIC for interrupt routing Feb 13 18:49:43.883219 kernel: ACPI: MCFG table detected, 1 entries Feb 13 18:49:43.883226 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 18:49:43.883235 kernel: printk: console [ttyAMA0] enabled Feb 13 18:49:43.883242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 18:49:43.883392 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 18:49:43.883469 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 18:49:43.883537 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 18:49:43.883602 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 18:49:43.883750 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 18:49:43.883765 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 18:49:43.883772 kernel: PCI host bridge to bus 0000:00 Feb 13 18:49:43.883847 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 18:49:43.883908 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 18:49:43.883968 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 18:49:43.884061 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 18:49:43.884140 
kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 18:49:43.884220 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 18:49:43.884301 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 18:49:43.884372 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 18:49:43.884440 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 18:49:43.884520 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 18:49:43.884603 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 18:49:43.884682 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 18:49:43.884750 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 18:49:43.884810 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 18:49:43.884869 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 18:49:43.884878 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 18:49:43.884885 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 18:49:43.884892 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 18:49:43.884899 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 18:49:43.884906 kernel: iommu: Default domain type: Translated Feb 13 18:49:43.884915 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 18:49:43.884923 kernel: efivars: Registered efivars operations Feb 13 18:49:43.884929 kernel: vgaarb: loaded Feb 13 18:49:43.884936 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 18:49:43.884943 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 18:49:43.884950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 18:49:43.884958 kernel: pnp: PnP ACPI init Feb 13 18:49:43.885036 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 18:49:43.885056 
kernel: pnp: PnP ACPI: found 1 devices Feb 13 18:49:43.885063 kernel: NET: Registered PF_INET protocol family Feb 13 18:49:43.885070 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 18:49:43.885077 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 18:49:43.885084 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 18:49:43.885091 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 18:49:43.885099 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 18:49:43.885106 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 18:49:43.885113 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:49:43.885121 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:49:43.885128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 18:49:43.885135 kernel: PCI: CLS 0 bytes, default 64 Feb 13 18:49:43.885142 kernel: kvm [1]: HYP mode not available Feb 13 18:49:43.885149 kernel: Initialise system trusted keyrings Feb 13 18:49:43.885156 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 18:49:43.885163 kernel: Key type asymmetric registered Feb 13 18:49:43.885170 kernel: Asymmetric key parser 'x509' registered Feb 13 18:49:43.885177 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 18:49:43.885186 kernel: io scheduler mq-deadline registered Feb 13 18:49:43.885193 kernel: io scheduler kyber registered Feb 13 18:49:43.885200 kernel: io scheduler bfq registered Feb 13 18:49:43.885207 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 18:49:43.885214 kernel: ACPI: button: Power Button [PWRB] Feb 13 18:49:43.885221 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 18:49:43.885296 kernel: virtio-pci 0000:00:01.0: 
enabling device (0005 -> 0007) Feb 13 18:49:43.885307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 18:49:43.885314 kernel: thunder_xcv, ver 1.0 Feb 13 18:49:43.885332 kernel: thunder_bgx, ver 1.0 Feb 13 18:49:43.885338 kernel: nicpf, ver 1.0 Feb 13 18:49:43.885345 kernel: nicvf, ver 1.0 Feb 13 18:49:43.885419 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 18:49:43.885484 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:49:43 UTC (1739472583) Feb 13 18:49:43.885493 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 18:49:43.885501 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 18:49:43.885508 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 18:49:43.885517 kernel: watchdog: Hard watchdog permanently disabled Feb 13 18:49:43.885524 kernel: NET: Registered PF_INET6 protocol family Feb 13 18:49:43.885531 kernel: Segment Routing with IPv6 Feb 13 18:49:43.885538 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 18:49:43.885545 kernel: NET: Registered PF_PACKET protocol family Feb 13 18:49:43.885552 kernel: Key type dns_resolver registered Feb 13 18:49:43.885559 kernel: registered taskstats version 1 Feb 13 18:49:43.885566 kernel: Loading compiled-in X.509 certificates Feb 13 18:49:43.885577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3' Feb 13 18:49:43.885588 kernel: Key type .fscrypt registered Feb 13 18:49:43.885595 kernel: Key type fscrypt-provisioning registered Feb 13 18:49:43.885602 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 18:49:43.885609 kernel: ima: Allocated hash algorithm: sha1 Feb 13 18:49:43.885616 kernel: ima: No architecture policies found Feb 13 18:49:43.885623 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 18:49:43.885670 kernel: clk: Disabling unused clocks Feb 13 18:49:43.885677 kernel: Freeing unused kernel memory: 39936K Feb 13 18:49:43.885684 kernel: Run /init as init process Feb 13 18:49:43.885693 kernel: with arguments: Feb 13 18:49:43.885700 kernel: /init Feb 13 18:49:43.885707 kernel: with environment: Feb 13 18:49:43.885713 kernel: HOME=/ Feb 13 18:49:43.885721 kernel: TERM=linux Feb 13 18:49:43.885728 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 18:49:43.885737 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:49:43.885746 systemd[1]: Detected virtualization kvm. Feb 13 18:49:43.885755 systemd[1]: Detected architecture arm64. Feb 13 18:49:43.885762 systemd[1]: Running in initrd. Feb 13 18:49:43.885769 systemd[1]: No hostname configured, using default hostname. Feb 13 18:49:43.885776 systemd[1]: Hostname set to . Feb 13 18:49:43.885784 systemd[1]: Initializing machine ID from VM UUID. Feb 13 18:49:43.885792 systemd[1]: Queued start job for default target initrd.target. Feb 13 18:49:43.885799 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:49:43.885807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:49:43.885817 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Feb 13 18:49:43.885825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:49:43.885833 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 18:49:43.885841 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 18:49:43.885859 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 18:49:43.885867 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 18:49:43.885884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:49:43.885892 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:49:43.885907 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:49:43.885914 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:49:43.885928 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:49:43.885936 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:49:43.885943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:49:43.885951 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:49:43.885959 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 18:49:43.885967 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 18:49:43.885975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:49:43.885983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:49:43.885991 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:49:43.885998 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 18:49:43.886006 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 18:49:43.886014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:49:43.886021 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 18:49:43.886028 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 18:49:43.886038 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:49:43.886045 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:49:43.886053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:49:43.886061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 18:49:43.886068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:49:43.886076 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 18:49:43.886085 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 18:49:43.886111 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 18:49:43.886132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:49:43.886140 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:49:43.886148 systemd-journald[237]: Journal started Feb 13 18:49:43.886171 systemd-journald[237]: Runtime Journal (/run/log/journal/cbef145d442941f39492dbe06aecd81f) is 5.9M, max 47.3M, 41.4M free. Feb 13 18:49:43.878762 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 18:49:43.887999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:49:43.890691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 18:49:43.890729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 18:49:43.892720 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:49:43.893935 kernel: Bridge firewalling registered Feb 13 18:49:43.894345 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 18:49:43.895296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:49:43.898363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:49:43.899773 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:49:43.902492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:49:43.914015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:49:43.915932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:49:43.918839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 18:49:43.922673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:49:43.927014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:49:43.932581 dracut-cmdline[276]: dracut-dracut-053 Feb 13 18:49:43.935100 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:49:43.957707 systemd-resolved[278]: Positive Trust Anchors: Feb 13 18:49:43.957727 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:49:43.957759 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:49:43.962586 systemd-resolved[278]: Defaulting to hostname 'linux'. Feb 13 18:49:43.964339 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:49:43.965261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:49:44.001661 kernel: SCSI subsystem initialized Feb 13 18:49:44.006653 kernel: Loading iSCSI transport class v2.0-870. Feb 13 18:49:44.013657 kernel: iscsi: registered transport (tcp) Feb 13 18:49:44.026658 kernel: iscsi: registered transport (qla4xxx) Feb 13 18:49:44.026689 kernel: QLogic iSCSI HBA Driver Feb 13 18:49:44.067984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 18:49:44.082807 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 18:49:44.099064 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 18:49:44.099117 kernel: device-mapper: uevent: version 1.0.3 Feb 13 18:49:44.100124 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 18:49:44.144690 kernel: raid6: neonx8 gen() 15675 MB/s Feb 13 18:49:44.161644 kernel: raid6: neonx4 gen() 15720 MB/s Feb 13 18:49:44.178679 kernel: raid6: neonx2 gen() 13092 MB/s Feb 13 18:49:44.195645 kernel: raid6: neonx1 gen() 10425 MB/s Feb 13 18:49:44.212666 kernel: raid6: int64x8 gen() 6738 MB/s Feb 13 18:49:44.229650 kernel: raid6: int64x4 gen() 7306 MB/s Feb 13 18:49:44.246648 kernel: raid6: int64x2 gen() 6060 MB/s Feb 13 18:49:44.263642 kernel: raid6: int64x1 gen() 5028 MB/s Feb 13 18:49:44.263657 kernel: raid6: using algorithm neonx4 gen() 15720 MB/s Feb 13 18:49:44.280658 kernel: raid6: .... xor() 12259 MB/s, rmw enabled Feb 13 18:49:44.280674 kernel: raid6: using neon recovery algorithm Feb 13 18:49:44.285648 kernel: xor: measuring software checksum speed Feb 13 18:49:44.285665 kernel: 8regs : 21118 MB/sec Feb 13 18:49:44.287055 kernel: 32regs : 19884 MB/sec Feb 13 18:49:44.287074 kernel: arm64_neon : 26981 MB/sec Feb 13 18:49:44.287083 kernel: xor: using function: arm64_neon (26981 MB/sec) Feb 13 18:49:44.337682 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 18:49:44.348612 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:49:44.359861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:49:44.370537 systemd-udevd[462]: Using default interface naming scheme 'v255'. Feb 13 18:49:44.373692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:49:44.375872 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 18:49:44.390066 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Feb 13 18:49:44.416898 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 18:49:44.435840 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:49:44.482652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:49:44.516843 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:49:44.531705 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:49:44.533339 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:49:44.536617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:49:44.539286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:49:44.547281 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 18:49:44.558155 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 18:49:44.558272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 18:49:44.558295 kernel: GPT:9289727 != 19775487
Feb 13 18:49:44.558304 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 18:49:44.558313 kernel: GPT:9289727 != 19775487
Feb 13 18:49:44.558322 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 18:49:44.558331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:49:44.556799 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:49:44.562624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:49:44.562758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:49:44.564531 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:49:44.565417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:49:44.565594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
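The GPT warnings above are plain arithmetic: a well-formed GPT keeps its alternate (backup) header in the disk's last logical block, but the primary header on vda points at LBA 9289727 instead. A minimal check using only the values the kernel reports:

```python
# Values reported by the kernel above.
total_sectors = 19775488    # vda: 19775488 512-byte logical blocks
reported_alt_lba = 9289727  # where the primary header says the backup is

# The backup GPT header belongs in the final logical block of the disk.
expected_alt_lba = total_sectors - 1
print(expected_alt_lba)                       # 19775487
print(reported_alt_lba != expected_alt_lba)   # True: the "9289727 != 19775487" complaint
```

This is the usual signature of an image written to a larger disk than it was built for; later in the log, disk-uuid.service rewrites the headers ("Secondary Header is updated.") and the warning does not recur.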
Feb 13 18:49:44.569597 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:49:44.577654 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (523)
Feb 13 18:49:44.581567 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (521)
Feb 13 18:49:44.580362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:49:44.583678 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:49:44.592620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:49:44.603106 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 18:49:44.607458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 18:49:44.611755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:49:44.615428 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 18:49:44.616408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 18:49:44.626768 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:49:44.628331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:49:44.633099 disk-uuid[553]: Primary Header is updated.
Feb 13 18:49:44.633099 disk-uuid[553]: Secondary Entries is updated.
Feb 13 18:49:44.633099 disk-uuid[553]: Secondary Header is updated.
Feb 13 18:49:44.636660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:49:44.653039 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:49:45.653407 disk-uuid[554]: The operation has completed successfully.
Feb 13 18:49:45.654290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:49:45.676390 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:49:45.676483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:49:45.697847 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:49:45.700360 sh[574]: Success
Feb 13 18:49:45.718094 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:49:45.755174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:49:45.756745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:49:45.757467 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:49:45.767089 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:49:45.767711 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:49:45.767730 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:49:45.767904 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:49:45.768928 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:49:45.772681 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:49:45.773460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:49:45.785844 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:49:45.787195 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline
Feb 13 18:49:45.794925 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:49:45.794962 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:49:45.794977 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:49:45.797849 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:49:45.803936 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:49:45.805701 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:49:45.811689 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:49:45.821964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:49:45.887555 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:49:45.897775 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:49:45.917267 ignition[663]: Ignition 2.20.0
Feb 13 18:49:45.917277 ignition[663]: Stage: fetch-offline
Feb 13 18:49:45.917308 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:45.917316 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:45.917475 ignition[663]: parsed url from cmdline: ""
Feb 13 18:49:45.917478 ignition[663]: no config URL provided
Feb 13 18:49:45.917483 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:49:45.921703 systemd-networkd[765]: lo: Link UP
Feb 13 18:49:45.917489 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:49:45.921706 systemd-networkd[765]: lo: Gained carrier
Feb 13 18:49:45.917514 ignition[663]: op(1): [started] loading QEMU firmware config module
Feb 13 18:49:45.922484 systemd-networkd[765]: Enumeration completed
Feb 13 18:49:45.917519 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 18:49:45.922901 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:49:45.922937 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:49:45.928380 ignition[663]: op(1): [finished] loading QEMU firmware config module
Feb 13 18:49:45.922940 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:49:45.923722 systemd-networkd[765]: eth0: Link UP
Feb 13 18:49:45.923725 systemd-networkd[765]: eth0: Gained carrier
Feb 13 18:49:45.923731 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:49:45.924404 systemd[1]: Reached target network.target - Network.
Feb 13 18:49:45.944684 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:49:45.954136 ignition[663]: parsing config with SHA512: 06fb6ce95c9eb39c09ad62978119df7a119293ce035e89329522a33af39c526055f4ca027ab6d62c9b3670af08f1824229d6b766ad54dbdb32bd4e31c1636098
Feb 13 18:49:45.959726 unknown[663]: fetched base config from "system"
Feb 13 18:49:45.959743 unknown[663]: fetched user config from "qemu"
Feb 13 18:49:45.960581 ignition[663]: fetch-offline: fetch-offline passed
Feb 13 18:49:45.961999 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:49:45.960677 ignition[663]: Ignition finished successfully
Feb 13 18:49:45.963598 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 18:49:45.979858 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
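Before applying a config, Ignition logs the SHA512 digest of the bytes it is about to parse, as in the "parsing config with SHA512: ..." line above. The hashing step can be sketched with Python's hashlib; the config body below is a hypothetical placeholder, so its digest will not match the one in the log:

```python
import hashlib

# Hypothetical stand-in for an Ignition config; the real bytes behind the
# digest logged above are not shown in this log.
config_bytes = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config_bytes).hexdigest()
# SHA-512 hex digests are always 128 characters, like the one in the log.
print(len(digest))  # 128
```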
Feb 13 18:49:45.990127 ignition[772]: Ignition 2.20.0
Feb 13 18:49:45.990137 ignition[772]: Stage: kargs
Feb 13 18:49:45.990296 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:45.990306 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:45.991164 ignition[772]: kargs: kargs passed
Feb 13 18:49:45.991206 ignition[772]: Ignition finished successfully
Feb 13 18:49:45.993146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:49:46.006770 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:49:46.015449 ignition[782]: Ignition 2.20.0
Feb 13 18:49:46.015456 ignition[782]: Stage: disks
Feb 13 18:49:46.015609 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:46.015621 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:46.017575 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:49:46.016411 ignition[782]: disks: disks passed
Feb 13 18:49:46.018708 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:49:46.016453 ignition[782]: Ignition finished successfully
Feb 13 18:49:46.019886 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:49:46.021067 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:49:46.022479 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:49:46.023613 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:49:46.034763 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:49:46.043793 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 18:49:46.046997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:49:46.049231 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:49:46.091644 kernel: EXT4-fs (vda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:49:46.091915 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:49:46.092902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:49:46.111708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:49:46.113124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:49:46.114276 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 18:49:46.114313 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:49:46.121587 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 18:49:46.121614 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:49:46.121625 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:49:46.121645 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:49:46.114334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:49:46.120588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:49:46.125023 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:49:46.122971 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:49:46.125702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:49:46.161992 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:49:46.165643 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:49:46.168675 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:49:46.172517 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:49:46.244934 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:49:46.258752 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:49:46.260127 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:49:46.264651 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:49:46.280156 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:49:46.284643 ignition[915]: INFO : Ignition 2.20.0
Feb 13 18:49:46.284643 ignition[915]: INFO : Stage: mount
Feb 13 18:49:46.284643 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:46.284643 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:46.287418 ignition[915]: INFO : mount: mount passed
Feb 13 18:49:46.287418 ignition[915]: INFO : Ignition finished successfully
Feb 13 18:49:46.286749 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:49:46.297803 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:49:46.766576 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:49:46.779789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:49:46.785863 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
Feb 13 18:49:46.785897 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:49:46.785908 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:49:46.787063 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:49:46.789659 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:49:46.789976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:49:46.805977 ignition[947]: INFO : Ignition 2.20.0
Feb 13 18:49:46.805977 ignition[947]: INFO : Stage: files
Feb 13 18:49:46.807170 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:46.807170 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:46.807170 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 18:49:46.809650 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 18:49:46.809650 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:49:46.812455 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:49:46.813514 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 18:49:46.813514 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:49:46.812929 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 18:49:46.816328 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:49:46.816328 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 18:49:46.876971 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 18:49:47.192817 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:49:47.192817 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:49:47.195779 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 18:49:47.352663 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 18:49:47.622887 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:49:47.622887 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 18:49:47.625625 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:49:47.641522 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:49:47.644573 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:49:47.646625 ignition[947]: INFO : files: files passed
Feb 13 18:49:47.646625 ignition[947]: INFO : Ignition finished successfully
Feb 13 18:49:47.647450 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:49:47.657856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:49:47.659302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:49:47.662219 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:49:47.663059 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:49:47.666224 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 18:49:47.669248 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:49:47.669248 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:49:47.671869 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:49:47.671382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:49:47.672932 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:49:47.678961 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:49:47.695775 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:49:47.696534 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:49:47.697849 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:49:47.699294 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:49:47.700593 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:49:47.701278 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:49:47.716404 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:49:47.726853 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:49:47.733988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:49:47.734898 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:49:47.736392 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:49:47.737718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:49:47.737825 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:49:47.739711 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:49:47.741193 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:49:47.742403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:49:47.743661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:49:47.745159 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:49:47.746685 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:49:47.748077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:49:47.749543 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:49:47.751010 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:49:47.752291 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:49:47.753393 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:49:47.753497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:49:47.755235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:49:47.756667 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:49:47.758266 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:49:47.761695 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:49:47.762614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:49:47.762731 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:49:47.764871 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:49:47.764979 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:49:47.766425 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:49:47.767667 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:49:47.772677 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:49:47.773609 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:49:47.775214 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:49:47.776353 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:49:47.776437 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:49:47.777555 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:49:47.777626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:49:47.778757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:49:47.778856 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:49:47.780359 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:49:47.780456 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:49:47.791859 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:49:47.793149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:49:47.793819 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:49:47.793925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:49:47.795359 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:49:47.795447 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:49:47.799452 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:49:47.800651 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:49:47.805064 ignition[1002]: INFO : Ignition 2.20.0
Feb 13 18:49:47.805064 ignition[1002]: INFO : Stage: umount
Feb 13 18:49:47.806415 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:49:47.806415 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:49:47.806415 ignition[1002]: INFO : umount: umount passed
Feb 13 18:49:47.806415 ignition[1002]: INFO : Ignition finished successfully
Feb 13 18:49:47.805483 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:49:47.807715 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:49:47.807803 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:49:47.809024 systemd[1]: Stopped target network.target - Network.
Feb 13 18:49:47.809954 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:49:47.810007 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:49:47.811300 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:49:47.811339 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:49:47.812566 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:49:47.812603 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:49:47.813894 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:49:47.813931 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:49:47.815317 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:49:47.817821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:49:47.824719 systemd-networkd[765]: eth0: DHCPv6 lease lost
Feb 13 18:49:47.826186 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:49:47.826304 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:49:47.827843 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:49:47.827873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:49:47.838745 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:49:47.839398 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:49:47.839447 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:49:47.842275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:49:47.843834 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:49:47.843935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:49:47.847839 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:49:47.847902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:49:47.848734 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:49:47.848771 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:49:47.850106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:49:47.850144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:49:47.852138 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:49:47.852287 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:49:47.853487 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:49:47.853569 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:49:47.855034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:49:47.855089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:49:47.856192 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:49:47.856224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:49:47.857129 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:49:47.857172 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:49:47.859263 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:49:47.859306 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:49:47.861967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:49:47.862018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:49:47.873852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:49:47.874621 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:49:47.874686 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:49:47.876349 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 18:49:47.876387 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:49:47.877844 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:49:47.877879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:49:47.879496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:49:47.879533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:49:47.881310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:49:47.881394 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:49:47.882806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:49:47.882875 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:49:47.884688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:49:47.885466 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:49:47.885521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:49:47.887560 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:49:47.896173 systemd[1]: Switching root.
Feb 13 18:49:47.926038 systemd-journald[237]: Journal stopped
Feb 13 18:49:48.595706 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 18:49:48.595768 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 18:49:48.595780 kernel: SELinux: policy capability open_perms=1
Feb 13 18:49:48.595789 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 18:49:48.595798 kernel: SELinux: policy capability always_check_network=0
Feb 13 18:49:48.595808 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 18:49:48.595817 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 18:49:48.595826 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 18:49:48.595835 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 18:49:48.595844 systemd[1]: Successfully loaded SELinux policy in 31.118ms.
Feb 13 18:49:48.595863 kernel: audit: type=1403 audit(1739472588.068:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 18:49:48.595873 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.084ms.
Feb 13 18:49:48.595884 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:49:48.595895 systemd[1]: Detected virtualization kvm.
Feb 13 18:49:48.595905 systemd[1]: Detected architecture arm64.
Feb 13 18:49:48.595915 systemd[1]: Detected first boot.
Feb 13 18:49:48.595925 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:49:48.595935 zram_generator::config[1047]: No configuration found.
Feb 13 18:49:48.595947 systemd[1]: Populated /etc with preset unit settings.
Feb 13 18:49:48.595964 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 18:49:48.595974 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 18:49:48.595988 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:49:48.595998 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 18:49:48.596009 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 18:49:48.596019 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 18:49:48.596029 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 18:49:48.596039 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 18:49:48.596051 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 18:49:48.596062 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 18:49:48.596072 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 18:49:48.596082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:49:48.596093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:49:48.596103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 18:49:48.596113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 18:49:48.596123 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 18:49:48.596135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:49:48.596146 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 18:49:48.596156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:49:48.596166 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 18:49:48.596176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 18:49:48.596187 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:49:48.596198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 18:49:48.596208 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:49:48.596220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:49:48.596229 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:49:48.596239 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:49:48.596256 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 18:49:48.596267 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 18:49:48.596278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:49:48.596288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:49:48.596298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:49:48.596308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 18:49:48.596320 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 18:49:48.596330 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 18:49:48.596340 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 18:49:48.596350 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 18:49:48.596360 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 18:49:48.596370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 18:49:48.596381 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 18:49:48.596392 systemd[1]: Reached target machines.target - Containers.
Feb 13 18:49:48.596402 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 18:49:48.596416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:49:48.596426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:49:48.596436 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 18:49:48.596446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:49:48.596458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:49:48.596468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:49:48.596479 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 18:49:48.596488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:49:48.596500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 18:49:48.596511 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 18:49:48.596521 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 18:49:48.596531 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 18:49:48.596541 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 18:49:48.596550 kernel: fuse: init (API version 7.39)
Feb 13 18:49:48.596559 kernel: loop: module loaded
Feb 13 18:49:48.596569 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:49:48.596579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:49:48.596590 kernel: ACPI: bus type drm_connector registered
Feb 13 18:49:48.596600 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 18:49:48.596610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 18:49:48.596620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:49:48.596692 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 18:49:48.596721 systemd-journald[1111]: Collecting audit messages is disabled.
Feb 13 18:49:48.596742 systemd[1]: Stopped verity-setup.service.
Feb 13 18:49:48.596753 systemd-journald[1111]: Journal started
Feb 13 18:49:48.596780 systemd-journald[1111]: Runtime Journal (/run/log/journal/cbef145d442941f39492dbe06aecd81f) is 5.9M, max 47.3M, 41.4M free.
Feb 13 18:49:48.410158 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 18:49:48.432565 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 18:49:48.432907 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 18:49:48.598750 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:49:48.599422 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 18:49:48.600331 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 18:49:48.601242 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 18:49:48.602101 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 18:49:48.603013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 18:49:48.603935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 18:49:48.604867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 18:49:48.605947 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:49:48.607152 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 18:49:48.607317 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 18:49:48.608491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:49:48.608623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:49:48.609707 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:49:48.609840 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:49:48.610868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:49:48.611012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:49:48.612316 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 18:49:48.612444 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 18:49:48.613498 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:49:48.613624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:49:48.614864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:49:48.615927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 18:49:48.617099 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 18:49:48.629040 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 18:49:48.635742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 18:49:48.637498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 18:49:48.638370 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 18:49:48.638399 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:49:48.640018 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 18:49:48.642084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 18:49:48.644824 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 18:49:48.645690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:49:48.647213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 18:49:48.648988 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 18:49:48.649857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:49:48.653684 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 18:49:48.654722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:49:48.657791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:49:48.658145 systemd-journald[1111]: Time spent on flushing to /var/log/journal/cbef145d442941f39492dbe06aecd81f is 28.055ms for 856 entries.
Feb 13 18:49:48.658145 systemd-journald[1111]: System Journal (/var/log/journal/cbef145d442941f39492dbe06aecd81f) is 8.0M, max 195.6M, 187.6M free.
Feb 13 18:49:48.698611 systemd-journald[1111]: Received client request to flush runtime journal.
Feb 13 18:49:48.698661 kernel: loop0: detected capacity change from 0 to 116784
Feb 13 18:49:48.660343 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 18:49:48.664817 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:49:48.668696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:49:48.669761 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 18:49:48.670731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 18:49:48.671771 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 18:49:48.675060 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 18:49:48.678316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 18:49:48.681864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 18:49:48.695326 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 18:49:48.696922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:49:48.700053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 18:49:48.705979 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Feb 13 18:49:48.706008 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Feb 13 18:49:48.709709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 18:49:48.713722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:49:48.727896 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 18:49:48.731696 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 18:49:48.732907 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 18:49:48.734305 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 18:49:48.746908 kernel: loop1: detected capacity change from 0 to 201592
Feb 13 18:49:48.752066 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 18:49:48.764804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:49:48.774908 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 18:49:48.774928 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 18:49:48.778892 kernel: loop2: detected capacity change from 0 to 113552
Feb 13 18:49:48.778939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:49:48.810656 kernel: loop3: detected capacity change from 0 to 116784
Feb 13 18:49:48.816657 kernel: loop4: detected capacity change from 0 to 201592
Feb 13 18:49:48.822669 kernel: loop5: detected capacity change from 0 to 113552
Feb 13 18:49:48.826199 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 18:49:48.827356 (sd-merge)[1187]: Merged extensions into '/usr'.
Feb 13 18:49:48.830432 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 18:49:48.830444 systemd[1]: Reloading...
Feb 13 18:49:48.869783 zram_generator::config[1210]: No configuration found.
Feb 13 18:49:48.945431 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 18:49:48.974010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:49:49.010906 systemd[1]: Reloading finished in 180 ms.
Feb 13 18:49:49.040167 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 18:49:49.041460 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 18:49:49.058870 systemd[1]: Starting ensure-sysext.service...
Feb 13 18:49:49.061007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:49:49.078210 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Feb 13 18:49:49.078225 systemd[1]: Reloading...
Feb 13 18:49:49.079070 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 18:49:49.079292 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 18:49:49.079946 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 18:49:49.080136 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Feb 13 18:49:49.080180 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Feb 13 18:49:49.082608 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:49:49.082619 systemd-tmpfiles[1248]: Skipping /boot
Feb 13 18:49:49.090619 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:49:49.090651 systemd-tmpfiles[1248]: Skipping /boot
Feb 13 18:49:49.120690 zram_generator::config[1275]: No configuration found.
Feb 13 18:49:49.194125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:49:49.229309 systemd[1]: Reloading finished in 150 ms.
Feb 13 18:49:49.245459 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 18:49:49.259079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:49:49.266150 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:49:49.268381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 18:49:49.270522 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 18:49:49.273961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:49:49.279067 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:49:49.283022 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 18:49:49.285888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:49:49.288034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:49:49.291914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:49:49.297903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:49:49.298748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:49:49.299536 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 18:49:49.301139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:49:49.301331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:49:49.302767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:49:49.302917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:49:49.305207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:49:49.305414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:49:49.314126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:49:49.320000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:49:49.321983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:49:49.326564 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Feb 13 18:49:49.326934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:49:49.327813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:49:49.328970 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 18:49:49.331234 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 18:49:49.333074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 18:49:49.334521 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 18:49:49.335953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:49:49.336066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:49:49.337469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:49:49.337603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:49:49.339219 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:49:49.339349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:49:49.340830 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 18:49:49.345795 augenrules[1350]: No rules
Feb 13 18:49:49.347085 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:49:49.347271 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:49:49.351969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:49:49.360820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:49:49.363750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:49:49.366062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:49:49.372947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:49:49.375149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:49:49.375219 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 18:49:49.375450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:49:49.376881 systemd[1]: Finished ensure-sysext.service.
Feb 13 18:49:49.377858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:49:49.378002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:49:49.379304 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:49:49.379430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:49:49.380531 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:49:49.380671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:49:49.384046 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 18:49:49.386135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:49:49.386279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:49:49.399540 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 18:49:49.408876 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:49:49.409880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:49:49.409963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:49:49.412095 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 18:49:49.424658 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1373)
Feb 13 18:49:49.452479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:49:49.460827 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 18:49:49.485100 systemd-networkd[1395]: lo: Link UP
Feb 13 18:49:49.485502 systemd-networkd[1395]: lo: Gained carrier
Feb 13 18:49:49.486397 systemd-networkd[1395]: Enumeration completed
Feb 13 18:49:49.486766 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:49:49.488329 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:49:49.488414 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:49:49.490230 systemd-networkd[1395]: eth0: Link UP
Feb 13 18:49:49.490332 systemd-networkd[1395]: eth0: Gained carrier
Feb 13 18:49:49.490395 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:49:49.491181 systemd-resolved[1314]: Positive Trust Anchors:
Feb 13 18:49:49.491197 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:49:49.491227 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:49:49.493854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 18:49:49.498177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 18:49:49.500828 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Feb 13 18:49:49.504709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:49:49.505691 systemd[1]: Reached target network.target - Network.
Feb 13 18:49:49.506325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:49:49.506707 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:49:49.507489 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 18:49:49.508433 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 18:49:49.508602 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Feb 13 18:49:49.510759 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 18:49:49.510878 systemd-timesyncd[1397]: Initial clock synchronization to Thu 2025-02-13 18:49:49.458949 UTC.
Feb 13 18:49:49.532976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:49:49.542978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 18:49:49.553776 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 18:49:49.570668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:49:49.576111 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:49:49.606040 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 18:49:49.607210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:49:49.608067 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:49:49.608901 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 18:49:49.609777 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 18:49:49.610801 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 18:49:49.611663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 18:49:49.612540 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 18:49:49.613458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 18:49:49.613493 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:49:49.614423 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:49:49.615988 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 18:49:49.617995 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 18:49:49.628550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 18:49:49.630620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 18:49:49.631857 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 18:49:49.632729 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:49:49.633430 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:49:49.634174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:49:49.634202 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:49:49.635071 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 18:49:49.636796 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 18:49:49.639821 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:49:49.640868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 18:49:49.642841 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 18:49:49.644851 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 18:49:49.647000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 18:49:49.647850 jq[1423]: false Feb 13 18:49:49.649567 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 18:49:49.652093 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 18:49:49.657426 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 18:49:49.672162 extend-filesystems[1424]: Found loop3 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found loop4 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found loop5 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda1 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda2 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda3 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found usr Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda4 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda6 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda7 Feb 13 18:49:49.672162 extend-filesystems[1424]: Found vda9 Feb 13 18:49:49.672162 extend-filesystems[1424]: Checking size of /dev/vda9 Feb 13 18:49:49.693101 extend-filesystems[1424]: Resized partition /dev/vda9 Feb 13 18:49:49.672510 dbus-daemon[1422]: [system] SELinux support is enabled Feb 13 18:49:49.674812 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 18:49:49.697400 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Feb 13 18:49:49.681633 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 18:49:49.682087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 18:49:49.699394 jq[1444]: true Feb 13 18:49:49.683342 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 18:49:49.687805 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 18:49:49.699796 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 18:49:49.689138 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 18:49:49.693715 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 18:49:49.697436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 18:49:49.697598 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 18:49:49.697871 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 18:49:49.697998 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 18:49:49.711709 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363) Feb 13 18:49:49.712422 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 18:49:49.712618 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 18:49:49.719702 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 18:49:49.739923 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 18:49:49.740661 systemd-logind[1437]: New seat seat0. Feb 13 18:49:49.741591 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 18:49:49.741918 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 18:49:49.741918 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 18:49:49.741918 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 18:49:49.748070 extend-filesystems[1424]: Resized filesystem in /dev/vda9 Feb 13 18:49:49.751219 tar[1447]: linux-arm64/LICENSE Feb 13 18:49:49.751219 tar[1447]: linux-arm64/helm Feb 13 18:49:49.742769 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 18:49:49.751488 update_engine[1443]: I20250213 18:49:49.743307 1443 main.cc:92] Flatcar Update Engine starting Feb 13 18:49:49.744718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 18:49:49.748376 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 18:49:49.750382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 18:49:49.750403 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 18:49:49.752610 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 18:49:49.753089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 18:49:49.754577 jq[1448]: true Feb 13 18:49:49.755865 update_engine[1443]: I20250213 18:49:49.755810 1443 update_check_scheduler.cc:74] Next update check in 9m43s Feb 13 18:49:49.764081 systemd[1]: Started update-engine.service - Update Engine. Feb 13 18:49:49.767890 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 18:49:49.810233 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:49:49.811177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 18:49:49.813375 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 18:49:49.822170 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:49:49.934154 containerd[1453]: time="2025-02-13T18:49:49.934006480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:49:49.960968 containerd[1453]: time="2025-02-13T18:49:49.960916280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:49:49.962548 containerd[1453]: time="2025-02-13T18:49:49.962511880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:49:49.962641 containerd[1453]: time="2025-02-13T18:49:49.962613720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:49:49.962729 containerd[1453]: time="2025-02-13T18:49:49.962714160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:49:49.962921 containerd[1453]: time="2025-02-13T18:49:49.962900800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 18:49:49.962995 containerd[1453]: time="2025-02-13T18:49:49.962981320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963110 containerd[1453]: time="2025-02-13T18:49:49.963090640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963163 containerd[1453]: time="2025-02-13T18:49:49.963151400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963416 containerd[1453]: time="2025-02-13T18:49:49.963390840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:49:49.963482 containerd[1453]: time="2025-02-13T18:49:49.963470120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963534 containerd[1453]: time="2025-02-13T18:49:49.963521160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963590 containerd[1453]: time="2025-02-13T18:49:49.963577840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.963751 containerd[1453]: time="2025-02-13T18:49:49.963730520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.964016 containerd[1453]: time="2025-02-13T18:49:49.963994040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:49:49.964203 containerd[1453]: time="2025-02-13T18:49:49.964181280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:49:49.964273 containerd[1453]: time="2025-02-13T18:49:49.964257440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 18:49:49.964419 containerd[1453]: time="2025-02-13T18:49:49.964399600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 18:49:49.964524 containerd[1453]: time="2025-02-13T18:49:49.964508600Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:49:49.968439 containerd[1453]: time="2025-02-13T18:49:49.968408000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:49:49.968560 containerd[1453]: time="2025-02-13T18:49:49.968544160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:49:49.968981 containerd[1453]: time="2025-02-13T18:49:49.968905000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:49:49.969025 containerd[1453]: time="2025-02-13T18:49:49.968985640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 18:49:49.969025 containerd[1453]: time="2025-02-13T18:49:49.969009400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:49:49.969352 containerd[1453]: time="2025-02-13T18:49:49.969326360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:49:49.969839 containerd[1453]: time="2025-02-13T18:49:49.969817400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 18:49:49.969953 containerd[1453]: time="2025-02-13T18:49:49.969934440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:49:49.969985 containerd[1453]: time="2025-02-13T18:49:49.969957040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 18:49:49.969985 containerd[1453]: time="2025-02-13T18:49:49.969973280Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 18:49:49.970020 containerd[1453]: time="2025-02-13T18:49:49.969989160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970020 containerd[1453]: time="2025-02-13T18:49:49.970002400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970020 containerd[1453]: time="2025-02-13T18:49:49.970014520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970067 containerd[1453]: time="2025-02-13T18:49:49.970029280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970067 containerd[1453]: time="2025-02-13T18:49:49.970043920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970067 containerd[1453]: time="2025-02-13T18:49:49.970058600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970118 containerd[1453]: time="2025-02-13T18:49:49.970070560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970118 containerd[1453]: time="2025-02-13T18:49:49.970082080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:49:49.970118 containerd[1453]: time="2025-02-13T18:49:49.970101960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970118 containerd[1453]: time="2025-02-13T18:49:49.970116720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 18:49:49.970185 containerd[1453]: time="2025-02-13T18:49:49.970129520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970185 containerd[1453]: time="2025-02-13T18:49:49.970141800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970185 containerd[1453]: time="2025-02-13T18:49:49.970152920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970185 containerd[1453]: time="2025-02-13T18:49:49.970165280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970185 containerd[1453]: time="2025-02-13T18:49:49.970176560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970189440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970202440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970216040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970227520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970239400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970262760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 18:49:49.970279 containerd[1453]: time="2025-02-13T18:49:49.970279160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:49:49.970391 containerd[1453]: time="2025-02-13T18:49:49.970308200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970391 containerd[1453]: time="2025-02-13T18:49:49.970322480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970391 containerd[1453]: time="2025-02-13T18:49:49.970333280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:49:49.970579 containerd[1453]: time="2025-02-13T18:49:49.970563160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:49:49.970600 containerd[1453]: time="2025-02-13T18:49:49.970586320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:49:49.970600 containerd[1453]: time="2025-02-13T18:49:49.970596680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:49:49.970699 containerd[1453]: time="2025-02-13T18:49:49.970683960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:49:49.970723 containerd[1453]: time="2025-02-13T18:49:49.970699800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.970723 containerd[1453]: time="2025-02-13T18:49:49.970713320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 18:49:49.970756 containerd[1453]: time="2025-02-13T18:49:49.970725280Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:49:49.970756 containerd[1453]: time="2025-02-13T18:49:49.970736120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 18:49:49.971204 containerd[1453]: time="2025-02-13T18:49:49.971149920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 18:49:49.971204 containerd[1453]: time="2025-02-13T18:49:49.971202960Z" level=info msg="Connect containerd service" Feb 13 18:49:49.971348 containerd[1453]: time="2025-02-13T18:49:49.971236760Z" level=info msg="using legacy CRI server" Feb 13 18:49:49.971348 containerd[1453]: time="2025-02-13T18:49:49.971251160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:49:49.971579 containerd[1453]: time="2025-02-13T18:49:49.971563240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:49:49.972331 containerd[1453]: time="2025-02-13T18:49:49.972302800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:49:49.972566 containerd[1453]: time="2025-02-13T18:49:49.972532280Z" level=info msg="Start subscribing containerd event" Feb 13 18:49:49.972602 containerd[1453]: time="2025-02-13T18:49:49.972576400Z" level=info msg="Start recovering state"
Feb 13 18:49:49.973280 containerd[1453]: time="2025-02-13T18:49:49.973258560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:49:49.973332 containerd[1453]: time="2025-02-13T18:49:49.973303760Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:49:49.973510 containerd[1453]: time="2025-02-13T18:49:49.973487080Z" level=info msg="Start event monitor" Feb 13 18:49:49.973510 containerd[1453]: time="2025-02-13T18:49:49.973508760Z" level=info msg="Start snapshots syncer" Feb 13 18:49:49.973556 containerd[1453]: time="2025-02-13T18:49:49.973518240Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:49:49.973556 containerd[1453]: time="2025-02-13T18:49:49.973525200Z" level=info msg="Start streaming server" Feb 13 18:49:49.974668 containerd[1453]: time="2025-02-13T18:49:49.973661800Z" level=info msg="containerd successfully booted in 0.041336s" Feb 13 18:49:49.973756 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:49:50.133909 tar[1447]: linux-arm64/README.md Feb 13 18:49:50.147820 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 18:49:50.501600 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:49:50.519506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:49:50.531880 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:49:50.536807 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 18:49:50.536982 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:49:50.539272 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 18:49:50.552679 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:49:50.554998 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 18:49:50.558755 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 18:49:50.559767 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:49:51.367791 systemd-networkd[1395]: eth0: Gained IPv6LL Feb 13 18:49:51.369865 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:49:51.371781 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:49:51.383929 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 18:49:51.385968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:49:51.387779 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:49:51.401367 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 18:49:51.401586 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 18:49:51.402930 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:49:51.405213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:49:51.903675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:51.904996 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:49:51.907309 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:49:51.907984 systemd[1]: Startup finished in 520ms (kernel) + 4.370s (initrd) + 3.872s (userspace) = 8.763s. 
Feb 13 18:49:51.918917 agetty[1512]: failed to open credentials directory Feb 13 18:49:51.918958 agetty[1513]: failed to open credentials directory Feb 13 18:49:52.309278 kubelet[1536]: E0213 18:49:52.309161 1536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:49:52.311389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:49:52.311548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:49:56.347174 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:49:56.348258 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:37510.service - OpenSSH per-connection server daemon (10.0.0.1:37510). Feb 13 18:49:56.422421 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 37510 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:56.424302 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:56.432271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:49:56.439904 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 18:49:56.441889 systemd-logind[1437]: New session 1 of user core. Feb 13 18:49:56.450385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:49:56.454538 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 18:49:56.460107 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:49:56.544478 systemd[1553]: Queued start job for default target default.target. 
Feb 13 18:49:56.559994 systemd[1553]: Created slice app.slice - User Application Slice. Feb 13 18:49:56.560047 systemd[1553]: Reached target paths.target - Paths. Feb 13 18:49:56.560060 systemd[1553]: Reached target timers.target - Timers. Feb 13 18:49:56.561376 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:49:56.572253 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:49:56.572373 systemd[1553]: Reached target sockets.target - Sockets. Feb 13 18:49:56.572389 systemd[1553]: Reached target basic.target - Basic System. Feb 13 18:49:56.572425 systemd[1553]: Reached target default.target - Main User Target. Feb 13 18:49:56.572452 systemd[1553]: Startup finished in 105ms. Feb 13 18:49:56.572741 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:49:56.574310 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 18:49:56.638826 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:37514.service - OpenSSH per-connection server daemon (10.0.0.1:37514). Feb 13 18:49:56.678162 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 37514 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:56.679531 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:56.683662 systemd-logind[1437]: New session 2 of user core. Feb 13 18:49:56.693842 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:49:56.744700 sshd[1566]: Connection closed by 10.0.0.1 port 37514 Feb 13 18:49:56.745181 sshd-session[1564]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:56.760962 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:37514.service: Deactivated successfully. Feb 13 18:49:56.762542 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 18:49:56.763771 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 18:49:56.778909 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:37522.service - OpenSSH per-connection server daemon (10.0.0.1:37522). Feb 13 18:49:56.779737 systemd-logind[1437]: Removed session 2. Feb 13 18:49:56.813045 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 37522 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:56.814205 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:56.818193 systemd-logind[1437]: New session 3 of user core. Feb 13 18:49:56.828770 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:49:56.876530 sshd[1573]: Connection closed by 10.0.0.1 port 37522 Feb 13 18:49:56.876955 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:56.896977 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:37522.service: Deactivated successfully. Feb 13 18:49:56.898479 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 18:49:56.900812 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Feb 13 18:49:56.910873 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:37530.service - OpenSSH per-connection server daemon (10.0.0.1:37530). Feb 13 18:49:56.911677 systemd-logind[1437]: Removed session 3. Feb 13 18:49:56.944432 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 37530 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:56.945542 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:56.949668 systemd-logind[1437]: New session 4 of user core. Feb 13 18:49:56.955854 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 18:49:57.007914 sshd[1580]: Connection closed by 10.0.0.1 port 37530 Feb 13 18:49:57.008239 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:57.018032 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:37530.service: Deactivated successfully. 
Feb 13 18:49:57.019481 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:49:57.020720 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:49:57.021787 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:37538.service - OpenSSH per-connection server daemon (10.0.0.1:37538). Feb 13 18:49:57.022458 systemd-logind[1437]: Removed session 4. Feb 13 18:49:57.058574 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 37538 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:57.059715 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:57.063731 systemd-logind[1437]: New session 5 of user core. Feb 13 18:49:57.080793 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:49:57.138657 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 18:49:57.141497 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:49:57.456890 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 18:49:57.457046 (dockerd)[1608]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 18:49:57.699131 dockerd[1608]: time="2025-02-13T18:49:57.699073145Z" level=info msg="Starting up" Feb 13 18:49:57.865134 dockerd[1608]: time="2025-02-13T18:49:57.864981495Z" level=info msg="Loading containers: start." Feb 13 18:49:58.010657 kernel: Initializing XFRM netlink socket Feb 13 18:49:58.073732 systemd-networkd[1395]: docker0: Link UP Feb 13 18:49:58.106822 dockerd[1608]: time="2025-02-13T18:49:58.106725021Z" level=info msg="Loading containers: done." Feb 13 18:49:58.121267 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck601185082-merged.mount: Deactivated successfully. 
Feb 13 18:49:58.122977 dockerd[1608]: time="2025-02-13T18:49:58.122938875Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 18:49:58.123049 dockerd[1608]: time="2025-02-13T18:49:58.123027717Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 18:49:58.123208 dockerd[1608]: time="2025-02-13T18:49:58.123188839Z" level=info msg="Daemon has completed initialization" Feb 13 18:49:58.150683 dockerd[1608]: time="2025-02-13T18:49:58.150548153Z" level=info msg="API listen on /run/docker.sock" Feb 13 18:49:58.150726 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 18:49:58.633423 containerd[1453]: time="2025-02-13T18:49:58.633385379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 18:49:59.395686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340233987.mount: Deactivated successfully. 
Feb 13 18:50:00.949636 containerd[1453]: time="2025-02-13T18:50:00.949571298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:00.950037 containerd[1453]: time="2025-02-13T18:50:00.949994457Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 18:50:00.950911 containerd[1453]: time="2025-02-13T18:50:00.950876953Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:00.953779 containerd[1453]: time="2025-02-13T18:50:00.953728095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:00.954955 containerd[1453]: time="2025-02-13T18:50:00.954837844Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.321409795s" Feb 13 18:50:00.954955 containerd[1453]: time="2025-02-13T18:50:00.954870308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 18:50:00.955787 containerd[1453]: time="2025-02-13T18:50:00.955762828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 18:50:02.561851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 13 18:50:02.572816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:50:02.673760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:50:02.677427 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:50:02.756282 kubelet[1868]: E0213 18:50:02.756199 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:50:02.759463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:50:02.759728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:50:03.028746 containerd[1453]: time="2025-02-13T18:50:03.028408253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:03.029481 containerd[1453]: time="2025-02-13T18:50:03.029448786Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 18:50:03.030713 containerd[1453]: time="2025-02-13T18:50:03.030676625Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:03.033381 containerd[1453]: time="2025-02-13T18:50:03.033343063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:03.034081 containerd[1453]: time="2025-02-13T18:50:03.034026922Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 2.078233183s" Feb 13 18:50:03.034081 containerd[1453]: time="2025-02-13T18:50:03.034055610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 18:50:03.034747 containerd[1453]: time="2025-02-13T18:50:03.034565867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 18:50:04.461548 containerd[1453]: time="2025-02-13T18:50:04.461333120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:04.462407 containerd[1453]: time="2025-02-13T18:50:04.462172122Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 18:50:04.463241 containerd[1453]: time="2025-02-13T18:50:04.463204731Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:04.465946 containerd[1453]: time="2025-02-13T18:50:04.465919222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:04.467050 containerd[1453]: time="2025-02-13T18:50:04.466994708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id 
\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.432399473s" Feb 13 18:50:04.467050 containerd[1453]: time="2025-02-13T18:50:04.467026597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 18:50:04.467729 containerd[1453]: time="2025-02-13T18:50:04.467553231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 18:50:05.401060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589431260.mount: Deactivated successfully. Feb 13 18:50:05.724544 containerd[1453]: time="2025-02-13T18:50:05.724420941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:05.725549 containerd[1453]: time="2025-02-13T18:50:05.725507472Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 18:50:05.726554 containerd[1453]: time="2025-02-13T18:50:05.726498366Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:05.728595 containerd[1453]: time="2025-02-13T18:50:05.728560286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:05.729260 containerd[1453]: time="2025-02-13T18:50:05.729131027Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.26155274s" Feb 13 18:50:05.729260 containerd[1453]: time="2025-02-13T18:50:05.729155206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 18:50:05.729644 containerd[1453]: time="2025-02-13T18:50:05.729612527Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 18:50:06.488216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015842279.mount: Deactivated successfully. Feb 13 18:50:07.515190 containerd[1453]: time="2025-02-13T18:50:07.515135893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.516189 containerd[1453]: time="2025-02-13T18:50:07.515533587Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 18:50:07.517089 containerd[1453]: time="2025-02-13T18:50:07.517063324Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.519936 containerd[1453]: time="2025-02-13T18:50:07.519900908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.522143 containerd[1453]: time="2025-02-13T18:50:07.521601611Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.79194592s" Feb 13 18:50:07.522143 containerd[1453]: time="2025-02-13T18:50:07.521653056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 18:50:07.522534 containerd[1453]: time="2025-02-13T18:50:07.522510803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 18:50:07.976958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302853993.mount: Deactivated successfully. Feb 13 18:50:07.981754 containerd[1453]: time="2025-02-13T18:50:07.981718150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.982204 containerd[1453]: time="2025-02-13T18:50:07.982163133Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 18:50:07.983284 containerd[1453]: time="2025-02-13T18:50:07.983251605Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.985318 containerd[1453]: time="2025-02-13T18:50:07.985268257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:07.986203 containerd[1453]: time="2025-02-13T18:50:07.986129881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 463.589458ms" Feb 13 
18:50:07.986203 containerd[1453]: time="2025-02-13T18:50:07.986158822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 18:50:07.986637 containerd[1453]: time="2025-02-13T18:50:07.986611359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 18:50:08.632160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280231869.mount: Deactivated successfully. Feb 13 18:50:12.055096 containerd[1453]: time="2025-02-13T18:50:12.055040339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:12.129505 containerd[1453]: time="2025-02-13T18:50:12.129411087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 18:50:12.130604 containerd[1453]: time="2025-02-13T18:50:12.130548018Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:12.133660 containerd[1453]: time="2025-02-13T18:50:12.133603410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:12.135014 containerd[1453]: time="2025-02-13T18:50:12.134939632Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.148285659s" Feb 13 18:50:12.135014 containerd[1453]: time="2025-02-13T18:50:12.134973861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 18:50:12.823148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 18:50:12.833004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:50:12.926614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:50:12.930774 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:50:12.964337 kubelet[2034]: E0213 18:50:12.964281 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:50:12.966134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:50:12.966250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:50:16.899093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:50:16.912838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:50:16.935666 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-5.scope)... Feb 13 18:50:16.935683 systemd[1]: Reloading... Feb 13 18:50:16.989688 zram_generator::config[2085]: No configuration found. Feb 13 18:50:17.163289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:50:17.214261 systemd[1]: Reloading finished in 278 ms. Feb 13 18:50:17.265260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 18:50:17.267659 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 18:50:17.267838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:50:17.269239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:50:17.364056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:50:17.368786 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:50:17.404017 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:50:17.404017 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 18:50:17.404017 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 18:50:17.404371 kubelet[2136]: I0213 18:50:17.404072 2136 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:50:18.653001 kubelet[2136]: I0213 18:50:18.652954 2136 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 18:50:18.653001 kubelet[2136]: I0213 18:50:18.652987 2136 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:50:18.653685 kubelet[2136]: I0213 18:50:18.653656 2136 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 18:50:18.695511 kubelet[2136]: E0213 18:50:18.695475 2136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:18.696397 kubelet[2136]: I0213 18:50:18.696314 2136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:50:18.702181 kubelet[2136]: E0213 18:50:18.702144 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:50:18.702181 kubelet[2136]: I0213 18:50:18.702182 2136 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:50:18.704889 kubelet[2136]: I0213 18:50:18.704868 2136 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 18:50:18.706092 kubelet[2136]: I0213 18:50:18.706042 2136 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:50:18.706257 kubelet[2136]: I0213 18:50:18.706087 2136 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 18:50:18.706340 kubelet[2136]: I0213 18:50:18.706320 2136 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 18:50:18.706340 kubelet[2136]: I0213 18:50:18.706330 2136 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 18:50:18.706545 kubelet[2136]: I0213 18:50:18.706520 2136 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:50:18.708924 kubelet[2136]: I0213 18:50:18.708904 2136 kubelet.go:446] "Attempting to sync node with API server" Feb 13 18:50:18.708981 kubelet[2136]: I0213 18:50:18.708929 2136 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:50:18.708981 kubelet[2136]: I0213 18:50:18.708951 2136 kubelet.go:352] "Adding apiserver pod source" Feb 13 18:50:18.708981 kubelet[2136]: I0213 18:50:18.708960 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:50:18.711244 kubelet[2136]: W0213 18:50:18.711200 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:18.711400 kubelet[2136]: E0213 18:50:18.711363 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:18.711559 kubelet[2136]: W0213 18:50:18.711527 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:18.711592 kubelet[2136]: E0213 18:50:18.711571 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:18.713764 kubelet[2136]: I0213 18:50:18.713724 2136 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:50:18.715512 kubelet[2136]: I0213 18:50:18.715461 2136 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:50:18.715654 kubelet[2136]: W0213 18:50:18.715643 2136 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 18:50:18.717275 kubelet[2136]: I0213 18:50:18.717254 2136 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 18:50:18.717340 kubelet[2136]: I0213 18:50:18.717286 2136 server.go:1287] "Started kubelet" Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.717523 2136 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.718104 2136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.718369 2136 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.718466 2136 server.go:490] "Adding debug handlers to kubelet server" Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.719431 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.719502 2136 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 18:50:18.719574 kubelet[2136]: I0213 18:50:18.719569 2136 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:50:18.720892 kubelet[2136]: I0213 18:50:18.720844 2136 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 18:50:18.720963 kubelet[2136]: I0213 18:50:18.720934 2136 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:50:18.721210 kubelet[2136]: E0213 18:50:18.721174 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:18.721304 kubelet[2136]: E0213 18:50:18.721280 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Feb 13 18:50:18.721416 kubelet[2136]: W0213 18:50:18.721376 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:18.721449 kubelet[2136]: E0213 18:50:18.721423 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:18.724910 kubelet[2136]: E0213 18:50:18.724871 2136 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:50:18.725209 kubelet[2136]: I0213 18:50:18.725191 2136 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:50:18.725293 kubelet[2136]: I0213 18:50:18.725276 2136 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:50:18.726175 kubelet[2136]: E0213 18:50:18.725577 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9216f294df0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:50:18.717269488 +0000 UTC m=+1.345293000,LastTimestamp:2025-02-13 18:50:18.717269488 +0000 UTC m=+1.345293000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 18:50:18.726297 kubelet[2136]: I0213 18:50:18.726227 2136 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:50:18.736662 kubelet[2136]: I0213 18:50:18.736607 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:50:18.737657 kubelet[2136]: I0213 18:50:18.737554 2136 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 18:50:18.737657 kubelet[2136]: I0213 18:50:18.737576 2136 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 18:50:18.737657 kubelet[2136]: I0213 18:50:18.737596 2136 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 18:50:18.737657 kubelet[2136]: I0213 18:50:18.737602 2136 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 18:50:18.737657 kubelet[2136]: E0213 18:50:18.737649 2136 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:50:18.739941 kubelet[2136]: W0213 18:50:18.739908 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:18.740015 kubelet[2136]: E0213 18:50:18.739959 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:18.740614 kubelet[2136]: I0213 18:50:18.740376 2136 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 18:50:18.740614 kubelet[2136]: I0213 18:50:18.740393 2136 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 18:50:18.740614 kubelet[2136]: I0213 18:50:18.740410 2136 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:50:18.807660 kubelet[2136]: I0213 18:50:18.807605 2136 policy_none.go:49] "None policy: Start" Feb 13 18:50:18.807809 kubelet[2136]: I0213 18:50:18.807796 2136 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 18:50:18.807879 
kubelet[2136]: I0213 18:50:18.807869 2136 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:50:18.812883 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 18:50:18.821510 kubelet[2136]: E0213 18:50:18.821463 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:18.826067 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 18:50:18.830307 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 18:50:18.838182 kubelet[2136]: E0213 18:50:18.838152 2136 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 18:50:18.840460 kubelet[2136]: I0213 18:50:18.840440 2136 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:50:18.841369 kubelet[2136]: I0213 18:50:18.840746 2136 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 18:50:18.841369 kubelet[2136]: I0213 18:50:18.840764 2136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:50:18.841369 kubelet[2136]: I0213 18:50:18.841290 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:50:18.842302 kubelet[2136]: E0213 18:50:18.842280 2136 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 18:50:18.842436 kubelet[2136]: E0213 18:50:18.842421 2136 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 18:50:18.921934 kubelet[2136]: E0213 18:50:18.921829 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Feb 13 18:50:18.941914 kubelet[2136]: I0213 18:50:18.941892 2136 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 18:50:18.942323 kubelet[2136]: E0213 18:50:18.942295 2136 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 18:50:19.045989 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. 
Feb 13 18:50:19.053970 kubelet[2136]: E0213 18:50:19.053877 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9216f294df0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:50:18.717269488 +0000 UTC m=+1.345293000,LastTimestamp:2025-02-13 18:50:18.717269488 +0000 UTC m=+1.345293000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 18:50:19.057392 kubelet[2136]: E0213 18:50:19.057368 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:19.059546 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. Feb 13 18:50:19.069703 kubelet[2136]: E0213 18:50:19.069676 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:19.072613 systemd[1]: Created slice kubepods-burstable-pode1984cadcb53beed1cf699d6d7941543.slice - libcontainer container kubepods-burstable-pode1984cadcb53beed1cf699d6d7941543.slice. 
Feb 13 18:50:19.073964 kubelet[2136]: E0213 18:50:19.073944 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:19.123432 kubelet[2136]: I0213 18:50:19.123250 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 18:50:19.123432 kubelet[2136]: I0213 18:50:19.123288 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:50:19.123432 kubelet[2136]: I0213 18:50:19.123309 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:50:19.123432 kubelet[2136]: I0213 18:50:19.123325 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:50:19.123432 kubelet[2136]: I0213 18:50:19.123343 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:19.123623 kubelet[2136]: I0213 18:50:19.123359 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:19.123623 kubelet[2136]: I0213 18:50:19.123373 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:19.123623 kubelet[2136]: I0213 18:50:19.123388 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:19.123623 kubelet[2136]: I0213 18:50:19.123405 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:19.144329 kubelet[2136]: I0213 18:50:19.144300 2136 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 
18:50:19.144713 kubelet[2136]: E0213 18:50:19.144687 2136 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 18:50:19.323268 kubelet[2136]: E0213 18:50:19.323207 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Feb 13 18:50:19.358836 kubelet[2136]: E0213 18:50:19.358797 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.359650 containerd[1453]: time="2025-02-13T18:50:19.359516072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 18:50:19.370867 kubelet[2136]: E0213 18:50:19.370810 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.372215 containerd[1453]: time="2025-02-13T18:50:19.372178768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 18:50:19.374658 kubelet[2136]: E0213 18:50:19.374444 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.374808 containerd[1453]: time="2025-02-13T18:50:19.374763700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1984cadcb53beed1cf699d6d7941543,Namespace:kube-system,Attempt:0,}" Feb 
13 18:50:19.545850 kubelet[2136]: I0213 18:50:19.545815 2136 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 18:50:19.546115 kubelet[2136]: E0213 18:50:19.546094 2136 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Feb 13 18:50:19.776818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154813430.mount: Deactivated successfully. Feb 13 18:50:19.781031 containerd[1453]: time="2025-02-13T18:50:19.780950399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:50:19.783699 containerd[1453]: time="2025-02-13T18:50:19.783622879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 18:50:19.784370 containerd[1453]: time="2025-02-13T18:50:19.784330704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:50:19.785742 containerd[1453]: time="2025-02-13T18:50:19.785685802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:50:19.786433 containerd[1453]: time="2025-02-13T18:50:19.786398346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:50:19.787007 containerd[1453]: time="2025-02-13T18:50:19.786979388Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 
18:50:19.788841 containerd[1453]: time="2025-02-13T18:50:19.787396132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:50:19.791686 containerd[1453]: time="2025-02-13T18:50:19.791649159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:50:19.792681 containerd[1453]: time="2025-02-13T18:50:19.792656664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 417.834851ms" Feb 13 18:50:19.793311 containerd[1453]: time="2025-02-13T18:50:19.793260062Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 433.66532ms" Feb 13 18:50:19.794048 containerd[1453]: time="2025-02-13T18:50:19.793927693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 421.678573ms" Feb 13 18:50:19.938011 containerd[1453]: time="2025-02-13T18:50:19.937910676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:50:19.938230 containerd[1453]: time="2025-02-13T18:50:19.937978947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:50:19.938230 containerd[1453]: time="2025-02-13T18:50:19.937991026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.938230 containerd[1453]: time="2025-02-13T18:50:19.938083973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.938401 containerd[1453]: time="2025-02-13T18:50:19.938217475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:50:19.938401 containerd[1453]: time="2025-02-13T18:50:19.938264269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:50:19.938401 containerd[1453]: time="2025-02-13T18:50:19.938279707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.938401 containerd[1453]: time="2025-02-13T18:50:19.938359256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.939745 containerd[1453]: time="2025-02-13T18:50:19.939483705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:50:19.941080 containerd[1453]: time="2025-02-13T18:50:19.940979064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:50:19.941080 containerd[1453]: time="2025-02-13T18:50:19.941001181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.941182 containerd[1453]: time="2025-02-13T18:50:19.941132763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:19.956803 systemd[1]: Started cri-containerd-24c33c1dfe0275246f866ce88feb06891f46e1cc8c88fcd86f1daf586ddbb9c5.scope - libcontainer container 24c33c1dfe0275246f866ce88feb06891f46e1cc8c88fcd86f1daf586ddbb9c5. Feb 13 18:50:19.960794 systemd[1]: Started cri-containerd-376df1e02cbe0206caa9124470a18c67ff275b92da1829163817ebe4f58e586f.scope - libcontainer container 376df1e02cbe0206caa9124470a18c67ff275b92da1829163817ebe4f58e586f. Feb 13 18:50:19.962569 systemd[1]: Started cri-containerd-ea85a22d278eb4c6994c94986c34e540ef4154684adeadbc2f1ec9a7a03bbca4.scope - libcontainer container ea85a22d278eb4c6994c94986c34e540ef4154684adeadbc2f1ec9a7a03bbca4. 
Feb 13 18:50:19.965781 kubelet[2136]: W0213 18:50:19.965699 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:19.965781 kubelet[2136]: E0213 18:50:19.965769 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:19.988877 containerd[1453]: time="2025-02-13T18:50:19.988841863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c33c1dfe0275246f866ce88feb06891f46e1cc8c88fcd86f1daf586ddbb9c5\"" Feb 13 18:50:19.990221 kubelet[2136]: E0213 18:50:19.990196 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.992578 containerd[1453]: time="2025-02-13T18:50:19.992545564Z" level=info msg="CreateContainer within sandbox \"24c33c1dfe0275246f866ce88feb06891f46e1cc8c88fcd86f1daf586ddbb9c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 18:50:19.993095 containerd[1453]: time="2025-02-13T18:50:19.993058295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1984cadcb53beed1cf699d6d7941543,Namespace:kube-system,Attempt:0,} returns sandbox id \"376df1e02cbe0206caa9124470a18c67ff275b92da1829163817ebe4f58e586f\"" Feb 13 18:50:19.993745 kubelet[2136]: E0213 18:50:19.993650 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.996040 containerd[1453]: time="2025-02-13T18:50:19.995993580Z" level=info msg="CreateContainer within sandbox \"376df1e02cbe0206caa9124470a18c67ff275b92da1829163817ebe4f58e586f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 18:50:19.997872 containerd[1453]: time="2025-02-13T18:50:19.997771581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea85a22d278eb4c6994c94986c34e540ef4154684adeadbc2f1ec9a7a03bbca4\"" Feb 13 18:50:19.998490 kubelet[2136]: E0213 18:50:19.998439 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:19.999811 containerd[1453]: time="2025-02-13T18:50:19.999784710Z" level=info msg="CreateContainer within sandbox \"ea85a22d278eb4c6994c94986c34e540ef4154684adeadbc2f1ec9a7a03bbca4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 18:50:20.009197 containerd[1453]: time="2025-02-13T18:50:20.009160735Z" level=info msg="CreateContainer within sandbox \"24c33c1dfe0275246f866ce88feb06891f46e1cc8c88fcd86f1daf586ddbb9c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd462eac8023336e60bd4ff2b67cd8a66ebc1585591c62fb6a04830a3c5bddb6\"" Feb 13 18:50:20.009723 containerd[1453]: time="2025-02-13T18:50:20.009667515Z" level=info msg="StartContainer for \"bd462eac8023336e60bd4ff2b67cd8a66ebc1585591c62fb6a04830a3c5bddb6\"" Feb 13 18:50:20.013626 containerd[1453]: time="2025-02-13T18:50:20.013586014Z" level=info msg="CreateContainer within sandbox \"376df1e02cbe0206caa9124470a18c67ff275b92da1829163817ebe4f58e586f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"57039663ebd6668fe8d54d01dc93ebdce38929987d6f54f2aee766a8d74f6be3\"" Feb 13 18:50:20.014018 containerd[1453]: time="2025-02-13T18:50:20.013996885Z" level=info msg="StartContainer for \"57039663ebd6668fe8d54d01dc93ebdce38929987d6f54f2aee766a8d74f6be3\"" Feb 13 18:50:20.016091 containerd[1453]: time="2025-02-13T18:50:20.015986611Z" level=info msg="CreateContainer within sandbox \"ea85a22d278eb4c6994c94986c34e540ef4154684adeadbc2f1ec9a7a03bbca4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2c83d6d149a3d2021d56ea376e26dee9db10cae6ae0c830483d29cf5e160f12\"" Feb 13 18:50:20.016497 containerd[1453]: time="2025-02-13T18:50:20.016475353Z" level=info msg="StartContainer for \"a2c83d6d149a3d2021d56ea376e26dee9db10cae6ae0c830483d29cf5e160f12\"" Feb 13 18:50:20.044866 systemd[1]: Started cri-containerd-57039663ebd6668fe8d54d01dc93ebdce38929987d6f54f2aee766a8d74f6be3.scope - libcontainer container 57039663ebd6668fe8d54d01dc93ebdce38929987d6f54f2aee766a8d74f6be3. Feb 13 18:50:20.045976 systemd[1]: Started cri-containerd-bd462eac8023336e60bd4ff2b67cd8a66ebc1585591c62fb6a04830a3c5bddb6.scope - libcontainer container bd462eac8023336e60bd4ff2b67cd8a66ebc1585591c62fb6a04830a3c5bddb6. Feb 13 18:50:20.049267 systemd[1]: Started cri-containerd-a2c83d6d149a3d2021d56ea376e26dee9db10cae6ae0c830483d29cf5e160f12.scope - libcontainer container a2c83d6d149a3d2021d56ea376e26dee9db10cae6ae0c830483d29cf5e160f12. 
Feb 13 18:50:20.073966 kubelet[2136]: W0213 18:50:20.073837 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:20.073966 kubelet[2136]: E0213 18:50:20.073902 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:20.081533 containerd[1453]: time="2025-02-13T18:50:20.081488698Z" level=info msg="StartContainer for \"57039663ebd6668fe8d54d01dc93ebdce38929987d6f54f2aee766a8d74f6be3\" returns successfully" Feb 13 18:50:20.101343 kubelet[2136]: W0213 18:50:20.101288 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:20.101439 kubelet[2136]: E0213 18:50:20.101352 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:20.103769 containerd[1453]: time="2025-02-13T18:50:20.103609414Z" level=info msg="StartContainer for \"bd462eac8023336e60bd4ff2b67cd8a66ebc1585591c62fb6a04830a3c5bddb6\" returns successfully" Feb 13 18:50:20.103769 containerd[1453]: time="2025-02-13T18:50:20.103677326Z" level=info msg="StartContainer for \"a2c83d6d149a3d2021d56ea376e26dee9db10cae6ae0c830483d29cf5e160f12\" 
returns successfully" Feb 13 18:50:20.127010 kubelet[2136]: E0213 18:50:20.126956 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Feb 13 18:50:20.200255 kubelet[2136]: W0213 18:50:20.200159 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Feb 13 18:50:20.200255 kubelet[2136]: E0213 18:50:20.200228 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:50:20.352039 kubelet[2136]: I0213 18:50:20.352002 2136 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 18:50:20.749188 kubelet[2136]: E0213 18:50:20.748661 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:20.749188 kubelet[2136]: E0213 18:50:20.748787 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:20.754480 kubelet[2136]: E0213 18:50:20.754459 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:20.754661 kubelet[2136]: E0213 18:50:20.754554 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:20.760284 kubelet[2136]: E0213 18:50:20.760260 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:20.760400 kubelet[2136]: E0213 18:50:20.760384 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:21.598066 kubelet[2136]: I0213 18:50:21.597979 2136 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 18:50:21.598066 kubelet[2136]: E0213 18:50:21.598015 2136 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 18:50:21.601221 kubelet[2136]: E0213 18:50:21.601117 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:21.701677 kubelet[2136]: E0213 18:50:21.701618 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:21.762241 kubelet[2136]: E0213 18:50:21.762045 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:21.762241 kubelet[2136]: E0213 18:50:21.762083 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:21.762241 kubelet[2136]: E0213 18:50:21.762164 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:21.762241 kubelet[2136]: E0213 18:50:21.762186 2136 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:21.785565 kubelet[2136]: E0213 18:50:21.785531 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 18:50:21.802687 kubelet[2136]: E0213 18:50:21.802660 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:21.903619 kubelet[2136]: E0213 18:50:21.903154 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.004151 kubelet[2136]: E0213 18:50:22.004112 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.104733 kubelet[2136]: E0213 18:50:22.104687 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.205053 kubelet[2136]: E0213 18:50:22.204949 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.306007 kubelet[2136]: E0213 18:50:22.305970 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.406933 kubelet[2136]: E0213 18:50:22.406891 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.507925 kubelet[2136]: E0213 18:50:22.507562 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.608599 kubelet[2136]: E0213 18:50:22.608554 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.709297 kubelet[2136]: E0213 18:50:22.709233 2136 kubelet_node_status.go:467] "Error 
getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.763343 kubelet[2136]: E0213 18:50:22.763181 2136 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 18:50:22.763343 kubelet[2136]: E0213 18:50:22.763315 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:22.810378 kubelet[2136]: E0213 18:50:22.810344 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:22.911209 kubelet[2136]: E0213 18:50:22.911160 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:23.012303 kubelet[2136]: E0213 18:50:23.012237 2136 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:50:23.122516 kubelet[2136]: I0213 18:50:23.122459 2136 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 18:50:23.135466 kubelet[2136]: I0213 18:50:23.135424 2136 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 18:50:23.138995 kubelet[2136]: I0213 18:50:23.138963 2136 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 18:50:23.332553 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-5.scope)... Feb 13 18:50:23.332569 systemd[1]: Reloading... Feb 13 18:50:23.392741 zram_generator::config[2456]: No configuration found. 
Feb 13 18:50:23.481110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:50:23.544763 systemd[1]: Reloading finished in 211 ms.
Feb 13 18:50:23.575121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:50:23.586692 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 18:50:23.586924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:50:23.586974 systemd[1]: kubelet.service: Consumed 1.678s CPU time, 125.2M memory peak, 0B memory swap peak.
Feb 13 18:50:23.601170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:50:23.698228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:50:23.701888 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 18:50:23.735334 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:50:23.735334 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 18:50:23.735334 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:50:23.735657 kubelet[2500]: I0213 18:50:23.735330 2500 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 18:50:23.742018 kubelet[2500]: I0213 18:50:23.741984 2500 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 18:50:23.742678 kubelet[2500]: I0213 18:50:23.742129 2500 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 18:50:23.742678 kubelet[2500]: I0213 18:50:23.742533 2500 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 18:50:23.744999 kubelet[2500]: I0213 18:50:23.744975 2500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 18:50:23.748134 kubelet[2500]: I0213 18:50:23.748093 2500 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 18:50:23.751221 kubelet[2500]: E0213 18:50:23.751193 2500 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 18:50:23.751221 kubelet[2500]: I0213 18:50:23.751219 2500 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 18:50:23.753605 kubelet[2500]: I0213 18:50:23.753573 2500 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 18:50:23.753832 kubelet[2500]: I0213 18:50:23.753803 2500 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 18:50:23.753981 kubelet[2500]: I0213 18:50:23.753828 2500 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 18:50:23.754067 kubelet[2500]: I0213 18:50:23.753992 2500 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 18:50:23.754067 kubelet[2500]: I0213 18:50:23.754001 2500 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 18:50:23.754067 kubelet[2500]: I0213 18:50:23.754058 2500 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:50:23.754190 kubelet[2500]: I0213 18:50:23.754180 2500 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 18:50:23.754721 kubelet[2500]: I0213 18:50:23.754195 2500 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 18:50:23.754721 kubelet[2500]: I0213 18:50:23.754212 2500 kubelet.go:352] "Adding apiserver pod source"
Feb 13 18:50:23.754721 kubelet[2500]: I0213 18:50:23.754225 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 18:50:23.755109 kubelet[2500]: I0213 18:50:23.755070 2500 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 18:50:23.755568 kubelet[2500]: I0213 18:50:23.755542 2500 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 18:50:23.755568 kubelet[2500]: I0213 18:50:23.755973 2500 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 18:50:23.755568 kubelet[2500]: I0213 18:50:23.756000 2500 server.go:1287] "Started kubelet"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.757932 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.758392 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.758512 2500 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.759661 2500 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 18:50:23.763650 kubelet[2500]: E0213 18:50:23.760266 2500 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.761509 2500 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.761657 2500 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 18:50:23.763650 kubelet[2500]: I0213 18:50:23.761997 2500 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 18:50:23.766801 kubelet[2500]: I0213 18:50:23.766753 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 18:50:23.767064 kubelet[2500]: I0213 18:50:23.767046 2500 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 18:50:23.771264 kubelet[2500]: I0213 18:50:23.771241 2500 factory.go:221] Registration of the containerd container factory successfully
Feb 13 18:50:23.771264 kubelet[2500]: I0213 18:50:23.771261 2500 factory.go:221] Registration of the systemd container factory successfully
Feb 13 18:50:23.771367 kubelet[2500]: I0213 18:50:23.771348 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 18:50:23.787762 kubelet[2500]: I0213 18:50:23.786566 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 18:50:23.787762 kubelet[2500]: I0213 18:50:23.787581 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 18:50:23.787762 kubelet[2500]: I0213 18:50:23.787599 2500 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 18:50:23.787762 kubelet[2500]: I0213 18:50:23.787617 2500 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 18:50:23.787762 kubelet[2500]: I0213 18:50:23.787623 2500 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 18:50:23.787762 kubelet[2500]: E0213 18:50:23.787701 2500 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 18:50:23.809014 kubelet[2500]: I0213 18:50:23.808990 2500 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809139 2500 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809164 2500 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809327 2500 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809338 2500 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809354 2500 policy_none.go:49] "None policy: Start"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809362 2500 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809370 2500 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 18:50:23.809653 kubelet[2500]: I0213 18:50:23.809458 2500 state_mem.go:75] "Updated machine memory state"
Feb 13 18:50:23.815111 kubelet[2500]: I0213 18:50:23.815078 2500 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 18:50:23.815246 kubelet[2500]: I0213 18:50:23.815230 2500 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 18:50:23.815279 kubelet[2500]: I0213 18:50:23.815250 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 18:50:23.815581 kubelet[2500]: I0213 18:50:23.815472 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 18:50:23.816258 kubelet[2500]: E0213 18:50:23.816241 2500 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 18:50:23.888765 kubelet[2500]: I0213 18:50:23.888729 2500 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 18:50:23.888892 kubelet[2500]: I0213 18:50:23.888845 2500 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 18:50:23.888892 kubelet[2500]: I0213 18:50:23.888729 2500 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:23.893996 kubelet[2500]: E0213 18:50:23.893967 2500 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 18:50:23.893996 kubelet[2500]: E0213 18:50:23.893990 2500 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:23.894115 kubelet[2500]: E0213 18:50:23.893982 2500 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 18:50:23.919339 kubelet[2500]: I0213 18:50:23.919312 2500 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 18:50:23.925626 kubelet[2500]: I0213 18:50:23.925568 2500 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Feb 13 18:50:23.925712 kubelet[2500]: I0213 18:50:23.925651 2500 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 18:50:24.063355 kubelet[2500]: I0213 18:50:24.063199 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:50:24.063355 kubelet[2500]: I0213 18:50:24.063238 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:24.063355 kubelet[2500]: I0213 18:50:24.063257 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:24.063355 kubelet[2500]: I0213 18:50:24.063273 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 18:50:24.063597 kubelet[2500]: I0213 18:50:24.063342 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:50:24.063597 kubelet[2500]: I0213 18:50:24.063451 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1984cadcb53beed1cf699d6d7941543-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1984cadcb53beed1cf699d6d7941543\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:50:24.063597 kubelet[2500]: I0213 18:50:24.063477 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:24.063597 kubelet[2500]: I0213 18:50:24.063493 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:24.063597 kubelet[2500]: I0213 18:50:24.063513 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:50:24.194459 kubelet[2500]: E0213 18:50:24.194420 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.194557 kubelet[2500]: E0213 18:50:24.194433 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.194750 kubelet[2500]: E0213 18:50:24.194584 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.755119 kubelet[2500]: I0213 18:50:24.755084 2500 apiserver.go:52] "Watching apiserver"
Feb 13 18:50:24.762604 kubelet[2500]: I0213 18:50:24.762577 2500 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 18:50:24.796619 kubelet[2500]: E0213 18:50:24.796592 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.797125 kubelet[2500]: E0213 18:50:24.797031 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.797498 kubelet[2500]: I0213 18:50:24.797431 2500 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 18:50:24.802594 kubelet[2500]: E0213 18:50:24.802528 2500 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 18:50:24.803539 kubelet[2500]: E0213 18:50:24.803190 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:24.816985 kubelet[2500]: I0213 18:50:24.816889 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.816874396 podStartE2EDuration="1.816874396s" podCreationTimestamp="2025-02-13 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:24.816757479 +0000 UTC m=+1.111964699" watchObservedRunningTime="2025-02-13 18:50:24.816874396 +0000 UTC m=+1.112081616"
Feb 13 18:50:24.827925 kubelet[2500]: I0213 18:50:24.827835 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8278227870000001 podStartE2EDuration="1.827822787s" podCreationTimestamp="2025-02-13 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:24.827649032 +0000 UTC m=+1.122856252" watchObservedRunningTime="2025-02-13 18:50:24.827822787 +0000 UTC m=+1.123030007"
Feb 13 18:50:24.842545 kubelet[2500]: I0213 18:50:24.842401 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8423887639999998 podStartE2EDuration="1.842388764s" podCreationTimestamp="2025-02-13 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:24.834482372 +0000 UTC m=+1.129689632" watchObservedRunningTime="2025-02-13 18:50:24.842388764 +0000 UTC m=+1.137595944"
Feb 13 18:50:25.032335 sudo[1588]: pam_unix(sudo:session): session closed for user root
Feb 13 18:50:25.035869 sshd[1587]: Connection closed by 10.0.0.1 port 37538
Feb 13 18:50:25.036216 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:25.039810 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:37538.service: Deactivated successfully.
Feb 13 18:50:25.041376 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 18:50:25.041554 systemd[1]: session-5.scope: Consumed 6.101s CPU time, 160.3M memory peak, 0B memory swap peak.
Feb 13 18:50:25.043255 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit.
Feb 13 18:50:25.044411 systemd-logind[1437]: Removed session 5.
Feb 13 18:50:25.797847 kubelet[2500]: E0213 18:50:25.797683 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:25.797847 kubelet[2500]: E0213 18:50:25.797786 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:26.798853 kubelet[2500]: E0213 18:50:26.798729 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:26.798853 kubelet[2500]: E0213 18:50:26.798781 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:29.247650 kubelet[2500]: I0213 18:50:29.247604 2500 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 18:50:29.248058 containerd[1453]: time="2025-02-13T18:50:29.248011875Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 18:50:29.248257 kubelet[2500]: I0213 18:50:29.248202 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 18:50:29.683723 kubelet[2500]: E0213 18:50:29.683318 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:29.938568 systemd[1]: Created slice kubepods-besteffort-pod9ce3a99b_2ab4_4113_ba68_8e03030f29ad.slice - libcontainer container kubepods-besteffort-pod9ce3a99b_2ab4_4113_ba68_8e03030f29ad.slice.
Feb 13 18:50:29.963231 systemd[1]: Created slice kubepods-burstable-podc79fb885_0791_4ba3_b119_298f20b430f9.slice - libcontainer container kubepods-burstable-podc79fb885_0791_4ba3_b119_298f20b430f9.slice.
Feb 13 18:50:30.002102 kubelet[2500]: I0213 18:50:30.002053 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c79fb885-0791-4ba3-b119-298f20b430f9-cni\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002207 kubelet[2500]: I0213 18:50:30.002126 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c79fb885-0791-4ba3-b119-298f20b430f9-xtables-lock\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002207 kubelet[2500]: I0213 18:50:30.002153 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ce3a99b-2ab4-4113-ba68-8e03030f29ad-xtables-lock\") pod \"kube-proxy-4ws4w\" (UID: \"9ce3a99b-2ab4-4113-ba68-8e03030f29ad\") " pod="kube-system/kube-proxy-4ws4w"
Feb 13 18:50:30.002207 kubelet[2500]: I0213 18:50:30.002168 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c79fb885-0791-4ba3-b119-298f20b430f9-run\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002207 kubelet[2500]: I0213 18:50:30.002181 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c79fb885-0791-4ba3-b119-298f20b430f9-cni-plugin\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002329 kubelet[2500]: I0213 18:50:30.002222 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c79fb885-0791-4ba3-b119-298f20b430f9-flannel-cfg\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002329 kubelet[2500]: I0213 18:50:30.002249 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpk5v\" (UniqueName: \"kubernetes.io/projected/c79fb885-0791-4ba3-b119-298f20b430f9-kube-api-access-wpk5v\") pod \"kube-flannel-ds-kcmsh\" (UID: \"c79fb885-0791-4ba3-b119-298f20b430f9\") " pod="kube-flannel/kube-flannel-ds-kcmsh"
Feb 13 18:50:30.002329 kubelet[2500]: I0213 18:50:30.002320 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ce3a99b-2ab4-4113-ba68-8e03030f29ad-kube-proxy\") pod \"kube-proxy-4ws4w\" (UID: \"9ce3a99b-2ab4-4113-ba68-8e03030f29ad\") " pod="kube-system/kube-proxy-4ws4w"
Feb 13 18:50:30.002387 kubelet[2500]: I0213 18:50:30.002352 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ce3a99b-2ab4-4113-ba68-8e03030f29ad-lib-modules\") pod \"kube-proxy-4ws4w\" (UID: \"9ce3a99b-2ab4-4113-ba68-8e03030f29ad\") " pod="kube-system/kube-proxy-4ws4w"
Feb 13 18:50:30.002387 kubelet[2500]: I0213 18:50:30.002368 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn62\" (UniqueName: \"kubernetes.io/projected/9ce3a99b-2ab4-4113-ba68-8e03030f29ad-kube-api-access-6vn62\") pod \"kube-proxy-4ws4w\" (UID: \"9ce3a99b-2ab4-4113-ba68-8e03030f29ad\") " pod="kube-system/kube-proxy-4ws4w"
Feb 13 18:50:30.261775 kubelet[2500]: E0213 18:50:30.261668 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:30.262276 containerd[1453]: time="2025-02-13T18:50:30.262235772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ws4w,Uid:9ce3a99b-2ab4-4113-ba68-8e03030f29ad,Namespace:kube-system,Attempt:0,}"
Feb 13 18:50:30.269313 kubelet[2500]: E0213 18:50:30.269274 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:30.270416 containerd[1453]: time="2025-02-13T18:50:30.270317499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-kcmsh,Uid:c79fb885-0791-4ba3-b119-298f20b430f9,Namespace:kube-flannel,Attempt:0,}"
Feb 13 18:50:30.293781 containerd[1453]: time="2025-02-13T18:50:30.293512100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:50:30.293781 containerd[1453]: time="2025-02-13T18:50:30.293641658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:50:30.293781 containerd[1453]: time="2025-02-13T18:50:30.293663977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:50:30.293781 containerd[1453]: time="2025-02-13T18:50:30.293738016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:50:30.298714 containerd[1453]: time="2025-02-13T18:50:30.298135292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:50:30.298714 containerd[1453]: time="2025-02-13T18:50:30.298192411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:50:30.298714 containerd[1453]: time="2025-02-13T18:50:30.298213251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:50:30.298714 containerd[1453]: time="2025-02-13T18:50:30.298312169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:50:30.319829 systemd[1]: Started cri-containerd-9069d7dd66b2fa98fe0e3189e3eb14cd5fdbf0cbff073d80689fef3f69258f9c.scope - libcontainer container 9069d7dd66b2fa98fe0e3189e3eb14cd5fdbf0cbff073d80689fef3f69258f9c.
Feb 13 18:50:30.323773 systemd[1]: Started cri-containerd-bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592.scope - libcontainer container bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592.
Feb 13 18:50:30.341602 containerd[1453]: time="2025-02-13T18:50:30.341511551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ws4w,Uid:9ce3a99b-2ab4-4113-ba68-8e03030f29ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9069d7dd66b2fa98fe0e3189e3eb14cd5fdbf0cbff073d80689fef3f69258f9c\""
Feb 13 18:50:30.342371 kubelet[2500]: E0213 18:50:30.342332 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:30.347762 containerd[1453]: time="2025-02-13T18:50:30.347615156Z" level=info msg="CreateContainer within sandbox \"9069d7dd66b2fa98fe0e3189e3eb14cd5fdbf0cbff073d80689fef3f69258f9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 18:50:30.354433 containerd[1453]: time="2025-02-13T18:50:30.354389508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-kcmsh,Uid:c79fb885-0791-4ba3-b119-298f20b430f9,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\""
Feb 13 18:50:30.355088 kubelet[2500]: E0213 18:50:30.355062 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:30.357106 containerd[1453]: time="2025-02-13T18:50:30.357081937Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 18:50:30.363079 containerd[1453]: time="2025-02-13T18:50:30.362982945Z" level=info msg="CreateContainer within sandbox \"9069d7dd66b2fa98fe0e3189e3eb14cd5fdbf0cbff073d80689fef3f69258f9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"276e53b1d729b3dae8f0b6683b44e6f8864062c0f0b649af8c9a52997e8b0dbf\""
Feb 13 18:50:30.363625 containerd[1453]: time="2025-02-13T18:50:30.363583894Z" level=info msg="StartContainer for \"276e53b1d729b3dae8f0b6683b44e6f8864062c0f0b649af8c9a52997e8b0dbf\""
Feb 13 18:50:30.385819 systemd[1]: Started cri-containerd-276e53b1d729b3dae8f0b6683b44e6f8864062c0f0b649af8c9a52997e8b0dbf.scope - libcontainer container 276e53b1d729b3dae8f0b6683b44e6f8864062c0f0b649af8c9a52997e8b0dbf.
Feb 13 18:50:30.414658 containerd[1453]: time="2025-02-13T18:50:30.414594488Z" level=info msg="StartContainer for \"276e53b1d729b3dae8f0b6683b44e6f8864062c0f0b649af8c9a52997e8b0dbf\" returns successfully"
Feb 13 18:50:30.807063 kubelet[2500]: E0213 18:50:30.807033 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:30.817422 kubelet[2500]: I0213 18:50:30.817364 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4ws4w" podStartSLOduration=1.817347785 podStartE2EDuration="1.817347785s" podCreationTimestamp="2025-02-13 18:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:30.817178348 +0000 UTC m=+7.112385568" watchObservedRunningTime="2025-02-13 18:50:30.817347785 +0000 UTC m=+7.112554965"
Feb 13 18:50:31.669709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165786837.mount: Deactivated successfully.
Feb 13 18:50:31.693843 containerd[1453]: time="2025-02-13T18:50:31.693794353Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:31.695781 containerd[1453]: time="2025-02-13T18:50:31.695727518Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 18:50:31.696401 containerd[1453]: time="2025-02-13T18:50:31.696378987Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:31.698791 containerd[1453]: time="2025-02-13T18:50:31.698761424Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:31.699825 containerd[1453]: time="2025-02-13T18:50:31.699503851Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.342392995s" Feb 13 18:50:31.699825 containerd[1453]: time="2025-02-13T18:50:31.699535810Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 18:50:31.702074 containerd[1453]: time="2025-02-13T18:50:31.702039805Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 18:50:31.717873 containerd[1453]: 
time="2025-02-13T18:50:31.717787562Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044\"" Feb 13 18:50:31.718928 containerd[1453]: time="2025-02-13T18:50:31.718199955Z" level=info msg="StartContainer for \"241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044\"" Feb 13 18:50:31.759856 systemd[1]: Started cri-containerd-241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044.scope - libcontainer container 241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044. Feb 13 18:50:31.783014 containerd[1453]: time="2025-02-13T18:50:31.782961833Z" level=info msg="StartContainer for \"241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044\" returns successfully" Feb 13 18:50:31.787048 systemd[1]: cri-containerd-241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044.scope: Deactivated successfully. 
Feb 13 18:50:31.810881 kubelet[2500]: E0213 18:50:31.810825 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:31.828207 containerd[1453]: time="2025-02-13T18:50:31.828150781Z" level=info msg="shim disconnected" id=241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044 namespace=k8s.io Feb 13 18:50:31.828505 containerd[1453]: time="2025-02-13T18:50:31.828361018Z" level=warning msg="cleaning up after shim disconnected" id=241d47f1015aa97c067313f02f44f6c4dbb9ba68bc3b5f793ade655b59b53044 namespace=k8s.io Feb 13 18:50:31.828505 containerd[1453]: time="2025-02-13T18:50:31.828375417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:50:32.814323 kubelet[2500]: E0213 18:50:32.814232 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:32.817658 containerd[1453]: time="2025-02-13T18:50:32.817253490Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 18:50:34.081121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120938558.mount: Deactivated successfully. 
Feb 13 18:50:34.755357 kubelet[2500]: E0213 18:50:34.754759 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:34.816925 kubelet[2500]: E0213 18:50:34.816894 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:35.099720 containerd[1453]: time="2025-02-13T18:50:35.099665978Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:35.100196 containerd[1453]: time="2025-02-13T18:50:35.100149451Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 18:50:35.100820 containerd[1453]: time="2025-02-13T18:50:35.100791322Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:35.103989 containerd[1453]: time="2025-02-13T18:50:35.103937636Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:50:35.105008 containerd[1453]: time="2025-02-13T18:50:35.104893702Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.287593972s" Feb 13 18:50:35.105008 containerd[1453]: time="2025-02-13T18:50:35.104924621Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns 
image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 18:50:35.108129 containerd[1453]: time="2025-02-13T18:50:35.107998816Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 18:50:35.117365 containerd[1453]: time="2025-02-13T18:50:35.117312000Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74\"" Feb 13 18:50:35.117864 containerd[1453]: time="2025-02-13T18:50:35.117808233Z" level=info msg="StartContainer for \"91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74\"" Feb 13 18:50:35.137605 systemd[1]: run-containerd-runc-k8s.io-91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74-runc.JnzFJg.mount: Deactivated successfully. Feb 13 18:50:35.144922 update_engine[1443]: I20250213 18:50:35.144852 1443 update_attempter.cc:509] Updating boot flags... Feb 13 18:50:35.148821 systemd[1]: Started cri-containerd-91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74.scope - libcontainer container 91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74. Feb 13 18:50:35.184733 containerd[1453]: time="2025-02-13T18:50:35.184513498Z" level=info msg="StartContainer for \"91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74\" returns successfully" Feb 13 18:50:35.185806 systemd[1]: cri-containerd-91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74.scope: Deactivated successfully. 
Feb 13 18:50:35.196917 kubelet[2500]: E0213 18:50:35.196878 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:35.200696 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2952) Feb 13 18:50:35.223434 kubelet[2500]: I0213 18:50:35.222206 2500 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 18:50:35.238677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2952) Feb 13 18:50:35.257931 kubelet[2500]: W0213 18:50:35.257713 2500 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 18:50:35.257931 kubelet[2500]: E0213 18:50:35.257757 2500 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 18:50:35.258116 kubelet[2500]: I0213 18:50:35.257912 2500 status_manager.go:890] "Failed to get status for pod" podUID="dac8e41d-a971-483b-8b13-eb1d1df68718" pod="kube-system/coredns-668d6bf9bc-dmjk6" err="pods \"coredns-668d6bf9bc-dmjk6\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Feb 13 18:50:35.277707 systemd[1]: Created slice kubepods-burstable-poddac8e41d_a971_483b_8b13_eb1d1df68718.slice - 
libcontainer container kubepods-burstable-poddac8e41d_a971_483b_8b13_eb1d1df68718.slice. Feb 13 18:50:35.292644 systemd[1]: Created slice kubepods-burstable-pod87afa5ca_71c7_4f79_83a9_42e7dfaa92e0.slice - libcontainer container kubepods-burstable-pod87afa5ca_71c7_4f79_83a9_42e7dfaa92e0.slice. Feb 13 18:50:35.303730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2952) Feb 13 18:50:35.335978 kubelet[2500]: I0213 18:50:35.335938 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac8e41d-a971-483b-8b13-eb1d1df68718-config-volume\") pod \"coredns-668d6bf9bc-dmjk6\" (UID: \"dac8e41d-a971-483b-8b13-eb1d1df68718\") " pod="kube-system/coredns-668d6bf9bc-dmjk6" Feb 13 18:50:35.335978 kubelet[2500]: I0213 18:50:35.335985 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87afa5ca-71c7-4f79-83a9-42e7dfaa92e0-config-volume\") pod \"coredns-668d6bf9bc-58s8k\" (UID: \"87afa5ca-71c7-4f79-83a9-42e7dfaa92e0\") " pod="kube-system/coredns-668d6bf9bc-58s8k" Feb 13 18:50:35.336124 kubelet[2500]: I0213 18:50:35.336009 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cwp\" (UniqueName: \"kubernetes.io/projected/87afa5ca-71c7-4f79-83a9-42e7dfaa92e0-kube-api-access-78cwp\") pod \"coredns-668d6bf9bc-58s8k\" (UID: \"87afa5ca-71c7-4f79-83a9-42e7dfaa92e0\") " pod="kube-system/coredns-668d6bf9bc-58s8k" Feb 13 18:50:35.336124 kubelet[2500]: I0213 18:50:35.336031 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lz5\" (UniqueName: \"kubernetes.io/projected/dac8e41d-a971-483b-8b13-eb1d1df68718-kube-api-access-w7lz5\") pod \"coredns-668d6bf9bc-dmjk6\" (UID: \"dac8e41d-a971-483b-8b13-eb1d1df68718\") " 
pod="kube-system/coredns-668d6bf9bc-dmjk6" Feb 13 18:50:35.337538 containerd[1453]: time="2025-02-13T18:50:35.337454504Z" level=info msg="shim disconnected" id=91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74 namespace=k8s.io Feb 13 18:50:35.337977 containerd[1453]: time="2025-02-13T18:50:35.337678780Z" level=warning msg="cleaning up after shim disconnected" id=91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74 namespace=k8s.io Feb 13 18:50:35.337977 containerd[1453]: time="2025-02-13T18:50:35.337698780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:50:35.820119 kubelet[2500]: E0213 18:50:35.820091 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:35.820930 kubelet[2500]: E0213 18:50:35.820144 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:35.823223 containerd[1453]: time="2025-02-13T18:50:35.823188206Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 18:50:35.835185 containerd[1453]: time="2025-02-13T18:50:35.835070193Z" level=info msg="CreateContainer within sandbox \"bcaba6f9c91ae07560f968b2b9f39ac0bc3000b67f11d5a673075057e69db592\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2fffa2e11bc9c5fc639c1e847a8694b29b816e5ad4739f09b9db659f31c171b5\"" Feb 13 18:50:35.835537 containerd[1453]: time="2025-02-13T18:50:35.835474707Z" level=info msg="StartContainer for \"2fffa2e11bc9c5fc639c1e847a8694b29b816e5ad4739f09b9db659f31c171b5\"" Feb 13 18:50:35.862801 systemd[1]: Started cri-containerd-2fffa2e11bc9c5fc639c1e847a8694b29b816e5ad4739f09b9db659f31c171b5.scope - libcontainer 
container 2fffa2e11bc9c5fc639c1e847a8694b29b816e5ad4739f09b9db659f31c171b5. Feb 13 18:50:35.888453 containerd[1453]: time="2025-02-13T18:50:35.888403973Z" level=info msg="StartContainer for \"2fffa2e11bc9c5fc639c1e847a8694b29b816e5ad4739f09b9db659f31c171b5\" returns successfully" Feb 13 18:50:36.117479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a58297124c2cd4d604fbcd94d322245b19f66dbf60bf5946cd0cf6a1ddae74-rootfs.mount: Deactivated successfully. Feb 13 18:50:36.438103 kubelet[2500]: E0213 18:50:36.437786 2500 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 18:50:36.438103 kubelet[2500]: E0213 18:50:36.437873 2500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87afa5ca-71c7-4f79-83a9-42e7dfaa92e0-config-volume podName:87afa5ca-71c7-4f79-83a9-42e7dfaa92e0 nodeName:}" failed. No retries permitted until 2025-02-13 18:50:36.937852613 +0000 UTC m=+13.233059833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87afa5ca-71c7-4f79-83a9-42e7dfaa92e0-config-volume") pod "coredns-668d6bf9bc-58s8k" (UID: "87afa5ca-71c7-4f79-83a9-42e7dfaa92e0") : failed to sync configmap cache: timed out waiting for the condition Feb 13 18:50:36.438980 kubelet[2500]: E0213 18:50:36.438317 2500 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 18:50:36.438980 kubelet[2500]: E0213 18:50:36.438370 2500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dac8e41d-a971-483b-8b13-eb1d1df68718-config-volume podName:dac8e41d-a971-483b-8b13-eb1d1df68718 nodeName:}" failed. No retries permitted until 2025-02-13 18:50:36.938356406 +0000 UTC m=+13.233563626 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dac8e41d-a971-483b-8b13-eb1d1df68718-config-volume") pod "coredns-668d6bf9bc-dmjk6" (UID: "dac8e41d-a971-483b-8b13-eb1d1df68718") : failed to sync configmap cache: timed out waiting for the condition Feb 13 18:50:36.823087 kubelet[2500]: E0213 18:50:36.823044 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:36.834248 kubelet[2500]: I0213 18:50:36.833859 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-kcmsh" podStartSLOduration=3.083473347 podStartE2EDuration="7.833838948s" podCreationTimestamp="2025-02-13 18:50:29 +0000 UTC" firstStartedPulling="2025-02-13 18:50:30.355502167 +0000 UTC m=+6.650709387" lastFinishedPulling="2025-02-13 18:50:35.105867768 +0000 UTC m=+11.401074988" observedRunningTime="2025-02-13 18:50:36.833193557 +0000 UTC m=+13.128400777" watchObservedRunningTime="2025-02-13 18:50:36.833838948 +0000 UTC m=+13.129046168" Feb 13 18:50:36.966367 systemd-networkd[1395]: flannel.1: Link UP Feb 13 18:50:36.966376 systemd-networkd[1395]: flannel.1: Gained carrier Feb 13 18:50:37.084741 kubelet[2500]: E0213 18:50:37.084136 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:37.085152 containerd[1453]: time="2025-02-13T18:50:37.085108750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dmjk6,Uid:dac8e41d-a971-483b-8b13-eb1d1df68718,Namespace:kube-system,Attempt:0,}" Feb 13 18:50:37.103962 kubelet[2500]: E0213 18:50:37.103921 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 
13 18:50:37.105056 containerd[1453]: time="2025-02-13T18:50:37.104442494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58s8k,Uid:87afa5ca-71c7-4f79-83a9-42e7dfaa92e0,Namespace:kube-system,Attempt:0,}" Feb 13 18:50:37.157785 systemd-networkd[1395]: cni0: Link UP Feb 13 18:50:37.157796 systemd-networkd[1395]: cni0: Gained carrier Feb 13 18:50:37.158042 systemd-networkd[1395]: cni0: Lost carrier Feb 13 18:50:37.162044 systemd-networkd[1395]: veth4133e6b8: Link UP Feb 13 18:50:37.164759 kernel: cni0: port 1(veth4133e6b8) entered blocking state Feb 13 18:50:37.164845 kernel: cni0: port 1(veth4133e6b8) entered disabled state Feb 13 18:50:37.164864 kernel: veth4133e6b8: entered allmulticast mode Feb 13 18:50:37.164880 kernel: veth4133e6b8: entered promiscuous mode Feb 13 18:50:37.166076 kernel: cni0: port 1(veth4133e6b8) entered blocking state Feb 13 18:50:37.166149 kernel: cni0: port 1(veth4133e6b8) entered forwarding state Feb 13 18:50:37.169716 kernel: cni0: port 1(veth4133e6b8) entered disabled state Feb 13 18:50:37.170015 systemd-networkd[1395]: veth4d376159: Link UP Feb 13 18:50:37.172114 kernel: cni0: port 2(veth4d376159) entered blocking state Feb 13 18:50:37.172200 kernel: cni0: port 2(veth4d376159) entered disabled state Feb 13 18:50:37.172220 kernel: veth4d376159: entered allmulticast mode Feb 13 18:50:37.173149 kernel: veth4d376159: entered promiscuous mode Feb 13 18:50:37.174056 kernel: cni0: port 2(veth4d376159) entered blocking state Feb 13 18:50:37.174107 kernel: cni0: port 2(veth4d376159) entered forwarding state Feb 13 18:50:37.175401 systemd-networkd[1395]: cni0: Gained carrier Feb 13 18:50:37.175653 kernel: cni0: port 2(veth4d376159) entered disabled state Feb 13 18:50:37.175697 kernel: cni0: port 1(veth4133e6b8) entered blocking state Feb 13 18:50:37.176698 kernel: cni0: port 1(veth4133e6b8) entered forwarding state Feb 13 18:50:37.176841 systemd-networkd[1395]: veth4133e6b8: Gained carrier Feb 13 18:50:37.185565 
containerd[1453]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000b48e8), "name":"cbr0", "type":"bridge"} Feb 13 18:50:37.185565 containerd[1453]: delegateAdd: netconf sent to delegate plugin: Feb 13 18:50:37.187155 kernel: cni0: port 2(veth4d376159) entered blocking state Feb 13 18:50:37.187203 kernel: cni0: port 2(veth4d376159) entered forwarding state Feb 13 18:50:37.187383 systemd-networkd[1395]: veth4d376159: Gained carrier Feb 13 18:50:37.192939 containerd[1453]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 13 18:50:37.192939 containerd[1453]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001208e8), "name":"cbr0", "type":"bridge"} Feb 13 18:50:37.192939 containerd[1453]: delegateAdd: netconf sent to delegate plugin: Feb 13 18:50:37.246977 containerd[1453]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T18:50:37.246583693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:50:37.246977 containerd[1453]: time="2025-02-13T18:50:37.246659172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:50:37.246977 containerd[1453]: time="2025-02-13T18:50:37.246670531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:37.246977 containerd[1453]: time="2025-02-13T18:50:37.246748250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:37.248160 containerd[1453]: time="2025-02-13T18:50:37.248034633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:50:37.248297 containerd[1453]: time="2025-02-13T18:50:37.248258350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:50:37.248383 containerd[1453]: time="2025-02-13T18:50:37.248348709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:37.248990 containerd[1453]: time="2025-02-13T18:50:37.248948381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:50:37.272832 systemd[1]: Started cri-containerd-f65468aef2ef4df107a38ca88102098c9b1d690f373264f6711f88d87aea6a50.scope - libcontainer container f65468aef2ef4df107a38ca88102098c9b1d690f373264f6711f88d87aea6a50. Feb 13 18:50:37.277873 systemd[1]: Started cri-containerd-0d962d3b4168ab0b97e2520ec90dc2e315ce8e23299e98902b1623caffd7c1e2.scope - libcontainer container 0d962d3b4168ab0b97e2520ec90dc2e315ce8e23299e98902b1623caffd7c1e2. Feb 13 18:50:37.285563 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 18:50:37.290031 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 18:50:37.310392 containerd[1453]: time="2025-02-13T18:50:37.310255730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58s8k,Uid:87afa5ca-71c7-4f79-83a9-42e7dfaa92e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f65468aef2ef4df107a38ca88102098c9b1d690f373264f6711f88d87aea6a50\"" Feb 13 18:50:37.311219 kubelet[2500]: E0213 18:50:37.311194 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:37.313441 containerd[1453]: time="2025-02-13T18:50:37.313105972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dmjk6,Uid:dac8e41d-a971-483b-8b13-eb1d1df68718,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d962d3b4168ab0b97e2520ec90dc2e315ce8e23299e98902b1623caffd7c1e2\"" Feb 13 18:50:37.313441 containerd[1453]: time="2025-02-13T18:50:37.313342249Z" level=info msg="CreateContainer within sandbox \"f65468aef2ef4df107a38ca88102098c9b1d690f373264f6711f88d87aea6a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 18:50:37.314605 kubelet[2500]: E0213 18:50:37.314581 2500 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:50:37.320155 containerd[1453]: time="2025-02-13T18:50:37.319983561Z" level=info msg="CreateContainer within sandbox \"0d962d3b4168ab0b97e2520ec90dc2e315ce8e23299e98902b1623caffd7c1e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 18:50:37.340996 containerd[1453]: time="2025-02-13T18:50:37.340868724Z" level=info msg="CreateContainer within sandbox \"f65468aef2ef4df107a38ca88102098c9b1d690f373264f6711f88d87aea6a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67c77bbd58580814e41c48d2df6c2428d1a0945a40848ead43b91698a23e5994\"" Feb 13 18:50:37.342858 containerd[1453]: time="2025-02-13T18:50:37.342813939Z" level=info msg="StartContainer for \"67c77bbd58580814e41c48d2df6c2428d1a0945a40848ead43b91698a23e5994\"" Feb 13 18:50:37.350685 containerd[1453]: time="2025-02-13T18:50:37.350622235Z" level=info msg="CreateContainer within sandbox \"0d962d3b4168ab0b97e2520ec90dc2e315ce8e23299e98902b1623caffd7c1e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d934b835f9d2c7abfe133db8821494b6aeacbb0ebec459e44b4837fd23b74ee\"" Feb 13 18:50:37.351447 containerd[1453]: time="2025-02-13T18:50:37.351321666Z" level=info msg="StartContainer for \"6d934b835f9d2c7abfe133db8821494b6aeacbb0ebec459e44b4837fd23b74ee\"" Feb 13 18:50:37.372794 systemd[1]: Started cri-containerd-67c77bbd58580814e41c48d2df6c2428d1a0945a40848ead43b91698a23e5994.scope - libcontainer container 67c77bbd58580814e41c48d2df6c2428d1a0945a40848ead43b91698a23e5994. Feb 13 18:50:37.375855 systemd[1]: Started cri-containerd-6d934b835f9d2c7abfe133db8821494b6aeacbb0ebec459e44b4837fd23b74ee.scope - libcontainer container 6d934b835f9d2c7abfe133db8821494b6aeacbb0ebec459e44b4837fd23b74ee. 
Feb 13 18:50:37.411487 containerd[1453]: time="2025-02-13T18:50:37.411324471Z" level=info msg="StartContainer for \"67c77bbd58580814e41c48d2df6c2428d1a0945a40848ead43b91698a23e5994\" returns successfully"
Feb 13 18:50:37.411487 containerd[1453]: time="2025-02-13T18:50:37.411414270Z" level=info msg="StartContainer for \"6d934b835f9d2c7abfe133db8821494b6aeacbb0ebec459e44b4837fd23b74ee\" returns successfully"
Feb 13 18:50:37.826271 kubelet[2500]: E0213 18:50:37.826123 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:37.831843 kubelet[2500]: E0213 18:50:37.831617 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:37.831964 kubelet[2500]: E0213 18:50:37.831659 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:37.836274 kubelet[2500]: I0213 18:50:37.836217 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dmjk6" podStartSLOduration=7.836199286 podStartE2EDuration="7.836199286s" podCreationTimestamp="2025-02-13 18:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:37.835816651 +0000 UTC m=+14.131023871" watchObservedRunningTime="2025-02-13 18:50:37.836199286 +0000 UTC m=+14.131406506"
Feb 13 18:50:37.865953 kubelet[2500]: I0213 18:50:37.865885 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58s8k" podStartSLOduration=7.8623597400000005 podStartE2EDuration="7.86235974s" podCreationTimestamp="2025-02-13 18:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:50:37.862281901 +0000 UTC m=+14.157489121" watchObservedRunningTime="2025-02-13 18:50:37.86235974 +0000 UTC m=+14.157566920"
Feb 13 18:50:38.087815 systemd-networkd[1395]: flannel.1: Gained IPv6LL
Feb 13 18:50:38.407805 systemd-networkd[1395]: veth4133e6b8: Gained IPv6LL
Feb 13 18:50:38.838729 kubelet[2500]: E0213 18:50:38.835890 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:38.840169 kubelet[2500]: E0213 18:50:38.839954 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:38.983792 systemd-networkd[1395]: veth4d376159: Gained IPv6LL
Feb 13 18:50:39.047785 systemd-networkd[1395]: cni0: Gained IPv6LL
Feb 13 18:50:39.692578 kubelet[2500]: E0213 18:50:39.690529 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:39.834965 kubelet[2500]: E0213 18:50:39.834761 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:39.834965 kubelet[2500]: E0213 18:50:39.834845 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:50:48.216221 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562).
Feb 13 18:50:48.269232 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:48.269896 sshd-session[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:48.273666 systemd-logind[1437]: New session 6 of user core.
Feb 13 18:50:48.282778 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 18:50:48.406990 sshd[3395]: Connection closed by 10.0.0.1 port 38562
Feb 13 18:50:48.407356 sshd-session[3393]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:48.410616 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:38562.service: Deactivated successfully.
Feb 13 18:50:48.412386 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 18:50:48.413146 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit.
Feb 13 18:50:48.414251 systemd-logind[1437]: Removed session 6.
Feb 13 18:50:53.420827 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:45450.service - OpenSSH per-connection server daemon (10.0.0.1:45450).
Feb 13 18:50:53.464320 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 45450 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:53.465396 sshd-session[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:53.470548 systemd-logind[1437]: New session 7 of user core.
Feb 13 18:50:53.480762 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 18:50:53.588704 sshd[3432]: Connection closed by 10.0.0.1 port 45450
Feb 13 18:50:53.589058 sshd-session[3430]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:53.592232 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:45450.service: Deactivated successfully.
Feb 13 18:50:53.595489 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 18:50:53.596661 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit.
Feb 13 18:50:53.597761 systemd-logind[1437]: Removed session 7.
Feb 13 18:50:58.599768 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:45466.service - OpenSSH per-connection server daemon (10.0.0.1:45466).
Feb 13 18:50:58.641276 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 45466 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:58.642790 sshd-session[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:58.647531 systemd-logind[1437]: New session 8 of user core.
Feb 13 18:50:58.656793 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 18:50:58.764528 sshd[3469]: Connection closed by 10.0.0.1 port 45466
Feb 13 18:50:58.764886 sshd-session[3467]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:58.775025 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:45466.service: Deactivated successfully.
Feb 13 18:50:58.776501 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 18:50:58.777743 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit.
Feb 13 18:50:58.784065 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:45470.service - OpenSSH per-connection server daemon (10.0.0.1:45470).
Feb 13 18:50:58.785354 systemd-logind[1437]: Removed session 8.
Feb 13 18:50:58.818257 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 45470 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:58.819454 sshd-session[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:58.823554 systemd-logind[1437]: New session 9 of user core.
Feb 13 18:50:58.833824 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 18:50:58.979955 sshd[3484]: Connection closed by 10.0.0.1 port 45470
Feb 13 18:50:58.980216 sshd-session[3482]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:58.988602 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:45470.service: Deactivated successfully.
Feb 13 18:50:58.991729 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 18:50:58.993376 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit.
Feb 13 18:50:58.999920 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:45472.service - OpenSSH per-connection server daemon (10.0.0.1:45472).
Feb 13 18:50:59.002385 systemd-logind[1437]: Removed session 9.
Feb 13 18:50:59.037949 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 45472 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:59.039162 sshd-session[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:59.042847 systemd-logind[1437]: New session 10 of user core.
Feb 13 18:50:59.055777 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 18:50:59.161498 sshd[3496]: Connection closed by 10.0.0.1 port 45472
Feb 13 18:50:59.161875 sshd-session[3494]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:59.165441 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:45472.service: Deactivated successfully.
Feb 13 18:50:59.167100 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 18:50:59.167709 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit.
Feb 13 18:50:59.168853 systemd-logind[1437]: Removed session 10.
Feb 13 18:51:04.171971 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:55540.service - OpenSSH per-connection server daemon (10.0.0.1:55540).
Feb 13 18:51:04.210776 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 55540 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:04.211920 sshd-session[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:04.215185 systemd-logind[1437]: New session 11 of user core.
Feb 13 18:51:04.224786 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 18:51:04.332099 sshd[3535]: Connection closed by 10.0.0.1 port 55540
Feb 13 18:51:04.332442 sshd-session[3533]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:04.342889 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:55540.service: Deactivated successfully.
Feb 13 18:51:04.344447 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 18:51:04.345706 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit.
Feb 13 18:51:04.346813 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544).
Feb 13 18:51:04.347497 systemd-logind[1437]: Removed session 11.
Feb 13 18:51:04.384099 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:04.385139 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:04.388757 systemd-logind[1437]: New session 12 of user core.
Feb 13 18:51:04.395767 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 18:51:04.565673 sshd[3549]: Connection closed by 10.0.0.1 port 55544
Feb 13 18:51:04.565992 sshd-session[3547]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:04.575879 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:55544.service: Deactivated successfully.
Feb 13 18:51:04.577732 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 18:51:04.579146 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit.
Feb 13 18:51:04.587861 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546).
Feb 13 18:51:04.589004 systemd-logind[1437]: Removed session 12.
Feb 13 18:51:04.623308 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:04.624408 sshd-session[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:04.627691 systemd-logind[1437]: New session 13 of user core.
Feb 13 18:51:04.638759 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 18:51:05.161050 sshd[3562]: Connection closed by 10.0.0.1 port 55546
Feb 13 18:51:05.161674 sshd-session[3560]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:05.172165 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:55546.service: Deactivated successfully.
Feb 13 18:51:05.178171 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 18:51:05.180517 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit.
Feb 13 18:51:05.191901 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:55550.service - OpenSSH per-connection server daemon (10.0.0.1:55550).
Feb 13 18:51:05.193147 systemd-logind[1437]: Removed session 13.
Feb 13 18:51:05.226887 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 55550 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:05.228009 sshd-session[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:05.231650 systemd-logind[1437]: New session 14 of user core.
Feb 13 18:51:05.250784 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 18:51:05.452773 sshd[3582]: Connection closed by 10.0.0.1 port 55550
Feb 13 18:51:05.452903 sshd-session[3580]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:05.461918 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:55550.service: Deactivated successfully.
Feb 13 18:51:05.463761 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 18:51:05.466123 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit.
Feb 13 18:51:05.467894 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562).
Feb 13 18:51:05.471240 systemd-logind[1437]: Removed session 14.
Feb 13 18:51:05.505923 sshd[3593]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:05.507119 sshd-session[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:05.510605 systemd-logind[1437]: New session 15 of user core.
Feb 13 18:51:05.520784 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 18:51:05.624523 sshd[3595]: Connection closed by 10.0.0.1 port 55562
Feb 13 18:51:05.624860 sshd-session[3593]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:05.627947 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:55562.service: Deactivated successfully.
Feb 13 18:51:05.630096 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 18:51:05.630812 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit.
Feb 13 18:51:05.631560 systemd-logind[1437]: Removed session 15.
Feb 13 18:51:10.635047 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:55572.service - OpenSSH per-connection server daemon (10.0.0.1:55572).
Feb 13 18:51:10.672674 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 55572 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:10.674030 sshd-session[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:10.677544 systemd-logind[1437]: New session 16 of user core.
Feb 13 18:51:10.683867 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 18:51:10.790670 sshd[3632]: Connection closed by 10.0.0.1 port 55572
Feb 13 18:51:10.790973 sshd-session[3630]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:10.793971 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:55572.service: Deactivated successfully.
Feb 13 18:51:10.796222 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 18:51:10.797156 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit.
Feb 13 18:51:10.797899 systemd-logind[1437]: Removed session 16.
Feb 13 18:51:15.806100 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:44704.service - OpenSSH per-connection server daemon (10.0.0.1:44704).
Feb 13 18:51:15.844063 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 44704 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:15.845419 sshd-session[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:15.849445 systemd-logind[1437]: New session 17 of user core.
Feb 13 18:51:15.857778 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 18:51:15.964574 sshd[3667]: Connection closed by 10.0.0.1 port 44704
Feb 13 18:51:15.964927 sshd-session[3665]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:15.968058 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:44704.service: Deactivated successfully.
Feb 13 18:51:15.970853 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 18:51:15.972196 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit.
Feb 13 18:51:15.973418 systemd-logind[1437]: Removed session 17.
Feb 13 18:51:20.977765 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:44720.service - OpenSSH per-connection server daemon (10.0.0.1:44720).
Feb 13 18:51:21.016892 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 44720 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:51:21.018172 sshd-session[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:21.022558 systemd-logind[1437]: New session 18 of user core.
Feb 13 18:51:21.035815 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 18:51:21.154727 sshd[3705]: Connection closed by 10.0.0.1 port 44720
Feb 13 18:51:21.155071 sshd-session[3703]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:21.157485 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:44720.service: Deactivated successfully.
Feb 13 18:51:21.160314 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 18:51:21.162808 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit.
Feb 13 18:51:21.163956 systemd-logind[1437]: Removed session 18.