Feb 13 18:48:47.872269 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 18:48:47.872290 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:48:47.872299 kernel: KASLR enabled
Feb 13 18:48:47.872305 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:48:47.872310 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 18:48:47.872316 kernel: random: crng init done
Feb 13 18:48:47.872322 kernel: secureboot: Secure boot disabled
Feb 13 18:48:47.872328 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:48:47.872334 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 18:48:47.872340 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 18:48:47.872346 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872352 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872358 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872364 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872371 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872378 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872385 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872391 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872397 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 18:48:47.872403 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 18:48:47.872409 kernel: NUMA: Failed to initialise from firmware
Feb 13 18:48:47.872415 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:48:47.872421 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 18:48:47.872427 kernel: Zone ranges:
Feb 13 18:48:47.872433 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:48:47.872440 kernel: DMA32 empty
Feb 13 18:48:47.872446 kernel: Normal empty
Feb 13 18:48:47.872452 kernel: Movable zone start for each node
Feb 13 18:48:47.872458 kernel: Early memory node ranges
Feb 13 18:48:47.872464 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 18:48:47.872470 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 18:48:47.872476 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 18:48:47.872482 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 18:48:47.872488 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 18:48:47.872494 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 18:48:47.872499 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 18:48:47.872505 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 18:48:47.872512 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 18:48:47.872518 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 18:48:47.872525 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 18:48:47.872533 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:48:47.872540 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 18:48:47.872546 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:48:47.872554 kernel: psci: Trusted OS migration not required
Feb 13 18:48:47.872560 kernel: psci: SMC Calling Convention v1.1
Feb 13 18:48:47.872567 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 18:48:47.872573 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:48:47.872579 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:48:47.872586 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 18:48:47.872593 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:48:47.872599 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:48:47.872605 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 18:48:47.872612 kernel: CPU features: detected: Spectre-v4
Feb 13 18:48:47.872619 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:48:47.872626 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 18:48:47.872632 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 18:48:47.872638 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 18:48:47.872645 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 18:48:47.872651 kernel: alternatives: applying boot alternatives
Feb 13 18:48:47.872658 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:48:47.872665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 18:48:47.872672 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:48:47.872678 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:48:47.872685 kernel: Fallback order for Node 0: 0
Feb 13 18:48:47.872692 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 18:48:47.872699 kernel: Policy zone: DMA
Feb 13 18:48:47.872705 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:48:47.872711 kernel: software IO TLB: area num 4.
Feb 13 18:48:47.872718 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 18:48:47.872725 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 18:48:47.872731 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 18:48:47.872738 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:48:47.872744 kernel: rcu: RCU event tracing is enabled.
Feb 13 18:48:47.872751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 18:48:47.872758 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 18:48:47.872764 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 18:48:47.872772 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:48:47.872790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 18:48:47.872797 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:48:47.872803 kernel: GICv3: 256 SPIs implemented
Feb 13 18:48:47.872809 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:48:47.872822 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:48:47.872828 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 18:48:47.872835 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 18:48:47.872841 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 18:48:47.872848 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 18:48:47.872854 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 18:48:47.872863 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 18:48:47.872870 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 18:48:47.872876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:48:47.872883 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:48:47.872889 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 18:48:47.872895 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 18:48:47.872902 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 18:48:47.872908 kernel: arm-pv: using stolen time PV
Feb 13 18:48:47.872915 kernel: Console: colour dummy device 80x25
Feb 13 18:48:47.872921 kernel: ACPI: Core revision 20230628
Feb 13 18:48:47.872928 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 18:48:47.872936 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:48:47.872943 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:48:47.872949 kernel: landlock: Up and running.
Feb 13 18:48:47.872956 kernel: SELinux: Initializing.
Feb 13 18:48:47.872962 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:48:47.872969 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:48:47.872976 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 18:48:47.872982 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 18:48:47.872989 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:48:47.872997 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 18:48:47.873003 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 18:48:47.873010 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 18:48:47.873016 kernel: Remapping and enabling EFI services.
Feb 13 18:48:47.873023 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:48:47.873029 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:48:47.873036 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 18:48:47.873043 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 18:48:47.873049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:48:47.873057 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 18:48:47.873064 kernel: Detected PIPT I-cache on CPU2
Feb 13 18:48:47.873075 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 18:48:47.873083 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 18:48:47.873090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:48:47.873097 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 18:48:47.873104 kernel: Detected PIPT I-cache on CPU3
Feb 13 18:48:47.873111 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 18:48:47.873118 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 18:48:47.873126 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:48:47.873133 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 18:48:47.873140 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 18:48:47.873147 kernel: SMP: Total of 4 processors activated.
Feb 13 18:48:47.873154 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:48:47.873161 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 18:48:47.873167 kernel: CPU features: detected: Common not Private translations
Feb 13 18:48:47.873175 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:48:47.873183 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 18:48:47.873190 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 18:48:47.873197 kernel: CPU features: detected: LSE atomic instructions
Feb 13 18:48:47.873204 kernel: CPU features: detected: Privileged Access Never
Feb 13 18:48:47.873211 kernel: CPU features: detected: RAS Extension Support
Feb 13 18:48:47.873218 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 18:48:47.873225 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:48:47.873232 kernel: alternatives: applying system-wide alternatives
Feb 13 18:48:47.873238 kernel: devtmpfs: initialized
Feb 13 18:48:47.873245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:48:47.873254 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 18:48:47.873261 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:48:47.873267 kernel: SMBIOS 3.0.0 present.
Feb 13 18:48:47.873274 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 18:48:47.873281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:48:47.873288 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:48:47.873295 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:48:47.873302 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:48:47.873310 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:48:47.873317 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Feb 13 18:48:47.873324 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:48:47.873331 kernel: cpuidle: using governor menu
Feb 13 18:48:47.873338 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:48:47.873345 kernel: ASID allocator initialised with 32768 entries
Feb 13 18:48:47.873352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:48:47.873359 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:48:47.873366 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 18:48:47.873372 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 18:48:47.873381 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:48:47.873388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:48:47.873394 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:48:47.873401 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:48:47.873409 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:48:47.873415 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:48:47.873422 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:48:47.873429 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:48:47.873436 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:48:47.873444 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:48:47.873451 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:48:47.873458 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:48:47.873465 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:48:47.873471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:48:47.873478 kernel: ACPI: Interpreter enabled
Feb 13 18:48:47.873485 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:48:47.873492 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 18:48:47.873499 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 18:48:47.873507 kernel: printk: console [ttyAMA0] enabled
Feb 13 18:48:47.873514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 18:48:47.873644 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 18:48:47.873714 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 18:48:47.873776 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 18:48:47.873867 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 18:48:47.873933 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 18:48:47.873945 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 18:48:47.873952 kernel: PCI host bridge to bus 0000:00
Feb 13 18:48:47.874020 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 18:48:47.874077 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 18:48:47.874132 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 18:48:47.874187 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 18:48:47.874263 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 18:48:47.874342 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 18:48:47.874405 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 18:48:47.874468 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 18:48:47.874529 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 18:48:47.874592 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 18:48:47.874654 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 18:48:47.874716 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 18:48:47.874776 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 18:48:47.874870 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 18:48:47.874929 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 18:48:47.874938 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 18:48:47.874945 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 18:48:47.874952 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 18:48:47.874959 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 18:48:47.874969 kernel: iommu: Default domain type: Translated
Feb 13 18:48:47.874977 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:48:47.874984 kernel: efivars: Registered efivars operations
Feb 13 18:48:47.874990 kernel: vgaarb: loaded
Feb 13 18:48:47.874997 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:48:47.875004 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:48:47.875012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:48:47.875018 kernel: pnp: PnP ACPI init
Feb 13 18:48:47.875091 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 18:48:47.875103 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 18:48:47.875110 kernel: NET: Registered PF_INET protocol family
Feb 13 18:48:47.875117 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:48:47.875124 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:48:47.875131 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:48:47.875138 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:48:47.875145 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:48:47.875152 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:48:47.875159 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:48:47.875168 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:48:47.875176 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:48:47.875182 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:48:47.875189 kernel: kvm [1]: HYP mode not available
Feb 13 18:48:47.875196 kernel: Initialise system trusted keyrings
Feb 13 18:48:47.875203 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:48:47.875210 kernel: Key type asymmetric registered
Feb 13 18:48:47.875217 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:48:47.875224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:48:47.875232 kernel: io scheduler mq-deadline registered
Feb 13 18:48:47.875239 kernel: io scheduler kyber registered
Feb 13 18:48:47.875246 kernel: io scheduler bfq registered
Feb 13 18:48:47.875253 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 18:48:47.875260 kernel: ACPI: button: Power Button [PWRB]
Feb 13 18:48:47.875267 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 18:48:47.875334 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 18:48:47.875343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:48:47.875350 kernel: thunder_xcv, ver 1.0
Feb 13 18:48:47.875359 kernel: thunder_bgx, ver 1.0
Feb 13 18:48:47.875366 kernel: nicpf, ver 1.0
Feb 13 18:48:47.875373 kernel: nicvf, ver 1.0
Feb 13 18:48:47.875447 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:48:47.875509 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:48:47 UTC (1739472527)
Feb 13 18:48:47.875519 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:48:47.875526 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 18:48:47.875533 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:48:47.875542 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:48:47.875549 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:48:47.875556 kernel: Segment Routing with IPv6
Feb 13 18:48:47.875563 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:48:47.875570 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:48:47.875577 kernel: Key type dns_resolver registered
Feb 13 18:48:47.875584 kernel: registered taskstats version 1
Feb 13 18:48:47.875591 kernel: Loading compiled-in X.509 certificates
Feb 13 18:48:47.875598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:48:47.875606 kernel: Key type .fscrypt registered
Feb 13 18:48:47.875613 kernel: Key type fscrypt-provisioning registered
Feb 13 18:48:47.875620 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:48:47.875627 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:48:47.875634 kernel: ima: No architecture policies found
Feb 13 18:48:47.875641 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:48:47.875648 kernel: clk: Disabling unused clocks
Feb 13 18:48:47.875655 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:48:47.875661 kernel: Run /init as init process
Feb 13 18:48:47.875670 kernel: with arguments:
Feb 13 18:48:47.875676 kernel: /init
Feb 13 18:48:47.875683 kernel: with environment:
Feb 13 18:48:47.875690 kernel: HOME=/
Feb 13 18:48:47.875697 kernel: TERM=linux
Feb 13 18:48:47.875703 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:48:47.875712 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:48:47.875721 systemd[1]: Detected virtualization kvm.
Feb 13 18:48:47.875730 systemd[1]: Detected architecture arm64.
Feb 13 18:48:47.875737 systemd[1]: Running in initrd.
Feb 13 18:48:47.875745 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:48:47.875752 systemd[1]: Hostname set to <localhost>.
Feb 13 18:48:47.875760 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:48:47.875767 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:48:47.875775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:48:47.875853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:48:47.875865 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:48:47.875872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:48:47.875880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:48:47.875888 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:48:47.875897 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:48:47.875905 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:48:47.875914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:48:47.875922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:48:47.875929 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:48:47.875952 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:48:47.875960 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:48:47.875968 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:48:47.875975 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:48:47.875983 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:48:47.875990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:48:47.876000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:48:47.876008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:48:47.876015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:48:47.876023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:48:47.876031 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:48:47.876039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:48:47.876046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:48:47.876054 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:48:47.876063 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:48:47.876070 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:48:47.876078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:48:47.876086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:48:47.876093 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:48:47.876101 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:48:47.876108 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:48:47.876118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:48:47.876147 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 18:48:47.876167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:48:47.876176 systemd-journald[239]: Journal started
Feb 13 18:48:47.876199 systemd-journald[239]: Runtime Journal (/run/log/journal/c43d5f76d6f642d2b47ac003b86d8a7c) is 5.9M, max 47.3M, 41.4M free.
Feb 13 18:48:47.884923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:48:47.884955 kernel: Bridge firewalling registered
Feb 13 18:48:47.867854 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 18:48:47.880898 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 18:48:47.888536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:48:47.888554 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:48:47.889573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:48:47.890561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:48:47.891942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:48:47.903972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:48:47.905485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:48:47.907965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:48:47.915259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:48:47.918863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:48:47.922905 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:48:47.923751 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:48:47.925949 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:48:47.938503 dracut-cmdline[278]: dracut-dracut-053
Feb 13 18:48:47.940883 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:48:47.949695 systemd-resolved[276]: Positive Trust Anchors:
Feb 13 18:48:47.949713 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:48:47.949744 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:48:47.954358 systemd-resolved[276]: Defaulting to hostname 'linux'.
Feb 13 18:48:47.955285 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:48:47.956617 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:48:48.007816 kernel: SCSI subsystem initialized
Feb 13 18:48:48.011801 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:48:48.020827 kernel: iscsi: registered transport (tcp)
Feb 13 18:48:48.031815 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:48:48.031835 kernel: QLogic iSCSI HBA Driver
Feb 13 18:48:48.072940 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:48:48.081952 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:48:48.099898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:48:48.099947 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:48:48.100826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:48:48.145805 kernel: raid6: neonx8 gen() 15747 MB/s
Feb 13 18:48:48.162797 kernel: raid6: neonx4 gen() 15789 MB/s
Feb 13 18:48:48.179798 kernel: raid6: neonx2 gen() 13196 MB/s
Feb 13 18:48:48.196805 kernel: raid6: neonx1 gen() 10475 MB/s
Feb 13 18:48:48.213796 kernel: raid6: int64x8 gen() 6791 MB/s
Feb 13 18:48:48.230803 kernel: raid6: int64x4 gen() 7347 MB/s
Feb 13 18:48:48.247803 kernel: raid6: int64x2 gen() 6109 MB/s
Feb 13 18:48:48.264802 kernel: raid6: int64x1 gen() 5058 MB/s
Feb 13 18:48:48.264831 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s
Feb 13 18:48:48.281809 kernel: raid6: .... xor() 12349 MB/s, rmw enabled
Feb 13 18:48:48.281844 kernel: raid6: using neon recovery algorithm
Feb 13 18:48:48.287095 kernel: xor: measuring software checksum speed
Feb 13 18:48:48.287126 kernel: 8regs : 21618 MB/sec
Feb 13 18:48:48.287145 kernel: 32regs : 21710 MB/sec
Feb 13 18:48:48.289800 kernel: arm64_neon : 1796 MB/sec
Feb 13 18:48:48.289823 kernel: xor: using function: 32regs (21710 MB/sec)
Feb 13 18:48:48.338820 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:48:48.350827 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:48:48.367004 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:48:48.378230 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 18:48:48.381273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:48:48.383610 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:48:48.397474 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 18:48:48.422614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:48:48.430964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:48:48.468844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:48:48.479115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:48:48.493170 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:48:48.494178 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:48:48.496057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:48:48.496849 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:48:48.502965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:48:48.516221 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 18:48:48.521790 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 18:48:48.521911 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 18:48:48.521923 kernel: GPT:9289727 != 19775487
Feb 13 18:48:48.521932 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 18:48:48.521942 kernel: GPT:9289727 != 19775487
Feb 13 18:48:48.521957 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 18:48:48.521966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:48:48.518021 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:48:48.523452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:48:48.523563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:48:48.525917 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:48:48.526652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:48:48.526772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:48:48.528856 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:48:48.543116 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Feb 13 18:48:48.543173 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (515)
Feb 13 18:48:48.542503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:48:48.550954 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 18:48:48.557406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:48:48.562733 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 18:48:48.569927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:48:48.573342 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 18:48:48.574279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 18:48:48.590994 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:48:48.592519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:48:48.597068 disk-uuid[552]: Primary Header is updated.
Feb 13 18:48:48.597068 disk-uuid[552]: Secondary Entries is updated.
Feb 13 18:48:48.597068 disk-uuid[552]: Secondary Header is updated.
Feb 13 18:48:48.602364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:48:48.613092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:48:49.610134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 18:48:49.610187 disk-uuid[553]: The operation has completed successfully.
Feb 13 18:48:49.631730 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:48:49.631866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:48:49.650945 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:48:49.653804 sh[572]: Success
Feb 13 18:48:49.671023 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:48:49.708159 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:48:49.709640 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:48:49.710396 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:48:49.721288 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:48:49.721323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:48:49.721333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:48:49.723139 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:48:49.723158 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:48:49.727231 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:48:49.728047 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:48:49.736944 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:48:49.738249 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:48:49.745281 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:48:49.745323 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:48:49.745333 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:48:49.747909 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:48:49.754612 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:48:49.756827 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:48:49.761321 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:48:49.767995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:48:49.867849 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:48:49.883977 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:48:49.887268 ignition[662]: Ignition 2.20.0
Feb 13 18:48:49.887278 ignition[662]: Stage: fetch-offline
Feb 13 18:48:49.887316 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:49.887324 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:49.887482 ignition[662]: parsed url from cmdline: ""
Feb 13 18:48:49.887485 ignition[662]: no config URL provided
Feb 13 18:48:49.887490 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:48:49.887497 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:48:49.887521 ignition[662]: op(1): [started] loading QEMU firmware config module
Feb 13 18:48:49.887526 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 18:48:49.896938 ignition[662]: op(1): [finished] loading QEMU firmware config module
Feb 13 18:48:49.908499 systemd-networkd[763]: lo: Link UP
Feb 13 18:48:49.908512 systemd-networkd[763]: lo: Gained carrier
Feb 13 18:48:49.909331 systemd-networkd[763]: Enumeration completed
Feb 13 18:48:49.909578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:48:49.909726 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:48:49.909729 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:48:49.910579 systemd-networkd[763]: eth0: Link UP
Feb 13 18:48:49.910582 systemd-networkd[763]: eth0: Gained carrier
Feb 13 18:48:49.910589 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:48:49.911441 systemd[1]: Reached target network.target - Network.
Feb 13 18:48:49.925059 ignition[662]: parsing config with SHA512: 6f0992eb59d60f8fc65b02cc4e3a3eb1850e9f416be00491bd72c563d2f486a9c4077f9922f394caa1423bcb6043bf2e5557386b1ea06b41dbc3df319771f9d2
Feb 13 18:48:49.926831 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:48:49.930592 unknown[662]: fetched base config from "system"
Feb 13 18:48:49.931155 unknown[662]: fetched user config from "qemu"
Feb 13 18:48:49.931674 ignition[662]: fetch-offline: fetch-offline passed
Feb 13 18:48:49.931759 ignition[662]: Ignition finished successfully
Feb 13 18:48:49.932832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:48:49.934309 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 18:48:49.939953 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:48:49.949958 ignition[770]: Ignition 2.20.0
Feb 13 18:48:49.949967 ignition[770]: Stage: kargs
Feb 13 18:48:49.950125 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:49.950135 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:49.950948 ignition[770]: kargs: kargs passed
Feb 13 18:48:49.950991 ignition[770]: Ignition finished successfully
Feb 13 18:48:49.953462 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:48:49.962927 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:48:49.972689 ignition[780]: Ignition 2.20.0
Feb 13 18:48:49.972700 ignition[780]: Stage: disks
Feb 13 18:48:49.972889 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:49.972899 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:49.973706 ignition[780]: disks: disks passed
Feb 13 18:48:49.973749 ignition[780]: Ignition finished successfully
Feb 13 18:48:49.975866 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:48:49.977246 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:48:49.978400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:48:49.980034 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:48:49.981429 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:48:49.982666 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:48:49.993943 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:48:50.004499 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 18:48:50.007921 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:48:50.009649 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:48:50.052590 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:48:50.053738 kernel: EXT4-fs (vda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:48:50.053642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:48:50.063884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:48:50.065697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:48:50.066498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 18:48:50.066533 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:48:50.066554 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:48:50.071484 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:48:50.072867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:48:50.077872 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Feb 13 18:48:50.077907 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:48:50.077924 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:48:50.077934 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:48:50.080832 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:48:50.082237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:48:50.115149 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:48:50.118916 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:48:50.122744 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:48:50.126413 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:48:50.199430 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:48:50.213956 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:48:50.217077 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:48:50.219861 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:48:50.238257 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:48:50.239926 ignition[914]: INFO : Ignition 2.20.0
Feb 13 18:48:50.239926 ignition[914]: INFO : Stage: mount
Feb 13 18:48:50.239926 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:50.239926 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:50.244733 ignition[914]: INFO : mount: mount passed
Feb 13 18:48:50.244733 ignition[914]: INFO : Ignition finished successfully
Feb 13 18:48:50.241208 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:48:50.257939 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:48:50.720708 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:48:50.736986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:48:50.743366 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Feb 13 18:48:50.743399 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:48:50.743410 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:48:50.744797 kernel: BTRFS info (device vda6): using free space tree
Feb 13 18:48:50.746811 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 18:48:50.747678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:48:50.766443 ignition[944]: INFO : Ignition 2.20.0
Feb 13 18:48:50.766443 ignition[944]: INFO : Stage: files
Feb 13 18:48:50.767708 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:50.767708 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:50.767708 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 18:48:50.770294 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 18:48:50.770294 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:48:50.770294 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:48:50.770294 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 18:48:50.774250 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:48:50.774250 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 18:48:50.774250 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 18:48:50.770617 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 18:48:50.826921 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 18:48:51.018080 systemd-networkd[763]: eth0: Gained IPv6LL
Feb 13 18:48:51.023184 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:48:51.025560 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 18:48:51.404184 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 18:48:52.004010 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 18:48:52.004010 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 18:48:52.006878 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:48:52.029978 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:48:52.033377 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:48:52.035315 ignition[944]: INFO : files: files passed
Feb 13 18:48:52.035315 ignition[944]: INFO : Ignition finished successfully
Feb 13 18:48:52.036120 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:48:52.043911 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:48:52.046975 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:48:52.047992 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:48:52.048070 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:48:52.053427 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 18:48:52.055673 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:48:52.055673 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:48:52.057915 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:48:52.058814 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:48:52.060145 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:48:52.067988 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:48:52.085818 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:48:52.085914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:48:52.087488 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:48:52.088740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:48:52.090193 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:48:52.090862 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:48:52.104757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:48:52.111979 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:48:52.119975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:48:52.120893 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:48:52.122424 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:48:52.123674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:48:52.123789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:48:52.125644 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:48:52.127079 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:48:52.128257 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:48:52.129508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:48:52.130994 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:48:52.132405 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:48:52.133722 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:48:52.135136 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:48:52.136566 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:48:52.137810 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:48:52.138952 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:48:52.139057 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:48:52.140726 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:48:52.142268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:48:52.143702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:48:52.146866 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:48:52.147815 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:48:52.147922 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:48:52.149981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:48:52.150092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:48:52.151533 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:48:52.152774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:48:52.154806 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:48:52.157049 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:48:52.158023 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:48:52.159446 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:48:52.159526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:48:52.160881 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:48:52.160951 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:48:52.162416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:48:52.162527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:48:52.164047 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:48:52.164149 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:48:52.174957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:48:52.175828 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:48:52.175956 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:48:52.178752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:48:52.179577 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:48:52.179690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:48:52.180992 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:48:52.181079 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:48:52.186970 ignition[1000]: INFO : Ignition 2.20.0
Feb 13 18:48:52.186970 ignition[1000]: INFO : Stage: umount
Feb 13 18:48:52.188364 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:48:52.188364 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 18:48:52.188364 ignition[1000]: INFO : umount: umount passed
Feb 13 18:48:52.188364 ignition[1000]: INFO : Ignition finished successfully
Feb 13 18:48:52.187036 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:48:52.188152 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:48:52.189532 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:48:52.189610 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:48:52.191027 systemd[1]: Stopped target network.target - Network.
Feb 13 18:48:52.192054 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:48:52.192109 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:48:52.193503 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:48:52.193545 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:48:52.195007 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:48:52.195053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:48:52.196257 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:48:52.196292 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:48:52.197811 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:48:52.199136 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:48:52.201454 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:48:52.203025 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:48:52.203122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:48:52.203839 systemd-networkd[763]: eth0: DHCPv6 lease lost
Feb 13 18:48:52.205722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:48:52.205811 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:48:52.208045 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:48:52.208152 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:48:52.210145 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:48:52.210201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:48:52.217941 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:48:52.219241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:48:52.219301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:48:52.221042 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:48:52.221086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:48:52.222575 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:48:52.222620 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:48:52.224398 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:48:52.235099 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:48:52.235202 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:48:52.242471 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:48:52.242599 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:48:52.244712 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:48:52.244749 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:48:52.246256 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:48:52.246286 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:48:52.247739 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:48:52.247838 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:48:52.250079 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:48:52.250127 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:48:52.252435 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:48:52.252483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:48:52.263003 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:48:52.264050 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:48:52.264106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:48:52.265922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:48:52.265963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:48:52.267856 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:48:52.267936 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:48:52.271009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:48:52.271077 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:48:52.273149 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:48:52.274584 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:48:52.274639 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:48:52.277085 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:48:52.285546 systemd[1]: Switching root.
Feb 13 18:48:52.306655 systemd-journald[239]: Journal stopped
Feb 13 18:48:52.966263 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 18:48:52.966320 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 18:48:52.966337 kernel: SELinux: policy capability open_perms=1
Feb 13 18:48:52.966347 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 18:48:52.966356 kernel: SELinux: policy capability always_check_network=0
Feb 13 18:48:52.966366 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 18:48:52.966375 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 18:48:52.966385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 18:48:52.966395 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 18:48:52.966410 kernel: audit: type=1403 audit(1739472532.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 18:48:52.966422 systemd[1]: Successfully loaded SELinux policy in 34.829ms.
Feb 13 18:48:52.966444 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.939ms.
Feb 13 18:48:52.966458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:48:52.966469 systemd[1]: Detected virtualization kvm.
Feb 13 18:48:52.966480 systemd[1]: Detected architecture arm64.
Feb 13 18:48:52.966491 systemd[1]: Detected first boot.
Feb 13 18:48:52.966501 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:48:52.966512 zram_generator::config[1045]: No configuration found.
Feb 13 18:48:52.966523 systemd[1]: Populated /etc with preset unit settings.
Feb 13 18:48:52.966536 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 18:48:52.966546 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 18:48:52.966557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:48:52.966571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 18:48:52.966582 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 18:48:52.966593 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 18:48:52.966603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 18:48:52.966613 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 18:48:52.966624 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 18:48:52.966639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 18:48:52.966650 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 18:48:52.966661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:48:52.966671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:48:52.966682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 18:48:52.966693 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 18:48:52.966704 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 18:48:52.966714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:48:52.966725 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 18:48:52.966737 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:48:52.966748 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 18:48:52.966758 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 18:48:52.966769 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:48:52.966789 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 18:48:52.966807 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:48:52.966818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:48:52.966832 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:48:52.966842 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:48:52.966853 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 18:48:52.966868 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 18:48:52.966879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:48:52.966889 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:48:52.966900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:48:52.966911 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 18:48:52.966922 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 18:48:52.966932 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 18:48:52.966946 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 18:48:52.966957 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 18:48:52.966967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 18:48:52.966978 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 18:48:52.966989 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 18:48:52.967000 systemd[1]: Reached target machines.target - Containers.
Feb 13 18:48:52.967016 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 18:48:52.967028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:48:52.967040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:48:52.967051 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 18:48:52.967062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:48:52.967073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:48:52.967084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:48:52.967094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 18:48:52.967104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:48:52.967115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 18:48:52.967128 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 18:48:52.967138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 18:48:52.967148 kernel: fuse: init (API version 7.39)
Feb 13 18:48:52.967159 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 18:48:52.967169 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 18:48:52.967181 kernel: loop: module loaded
Feb 13 18:48:52.967191 kernel: ACPI: bus type drm_connector registered
Feb 13 18:48:52.967200 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:48:52.967211 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:48:52.967222 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 18:48:52.967234 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 18:48:52.967244 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:48:52.967273 systemd-journald[1111]: Collecting audit messages is disabled.
Feb 13 18:48:52.967294 systemd-journald[1111]: Journal started
Feb 13 18:48:52.967319 systemd-journald[1111]: Runtime Journal (/run/log/journal/c43d5f76d6f642d2b47ac003b86d8a7c) is 5.9M, max 47.3M, 41.4M free.
Feb 13 18:48:52.798681 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 18:48:52.814245 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 18:48:52.814583 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 18:48:52.970152 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 18:48:52.970191 systemd[1]: Stopped verity-setup.service.
Feb 13 18:48:52.972900 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:48:52.973457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 18:48:52.974349 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 18:48:52.975274 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 18:48:52.976102 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 18:48:52.976977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 18:48:52.977866 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 18:48:52.979856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 18:48:52.980927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:48:52.982051 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 18:48:52.982197 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 18:48:52.983295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:48:52.983436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:48:52.984497 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:48:52.984628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:48:52.985641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:48:52.985776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:48:52.986897 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 18:48:52.987036 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 18:48:52.988155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:48:52.988278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:48:52.989436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:48:52.992105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 18:48:52.993394 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 18:48:53.004907 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 18:48:53.020888 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 18:48:53.022668 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 18:48:53.023521 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 18:48:53.023556 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:48:53.025141 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 18:48:53.026959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 18:48:53.028729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 18:48:53.029627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:48:53.031285 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 18:48:53.033902 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 18:48:53.034726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:48:53.035937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 18:48:53.039610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:48:53.040912 systemd-journald[1111]: Time spent on flushing to /var/log/journal/c43d5f76d6f642d2b47ac003b86d8a7c is 25.776ms for 854 entries.
Feb 13 18:48:53.040912 systemd-journald[1111]: System Journal (/var/log/journal/c43d5f76d6f642d2b47ac003b86d8a7c) is 8.0M, max 195.6M, 187.6M free.
Feb 13 18:48:53.083564 systemd-journald[1111]: Received client request to flush runtime journal.
Feb 13 18:48:53.083615 kernel: loop0: detected capacity change from 0 to 116784
Feb 13 18:48:53.040641 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:48:53.043722 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 18:48:53.049805 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 18:48:53.051884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:48:53.054030 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 18:48:53.055010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 18:48:53.056058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 18:48:53.057156 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 18:48:53.061194 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 18:48:53.077418 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 18:48:53.083435 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 18:48:53.086018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:48:53.088545 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 18:48:53.092885 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 18:48:53.103111 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 18:48:53.104723 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 18:48:53.105381 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 18:48:53.114993 kernel: loop1: detected capacity change from 0 to 189592
Feb 13 18:48:53.117070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:48:53.121717 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 18:48:53.135435 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 18:48:53.135454 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 18:48:53.139861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:48:53.146801 kernel: loop2: detected capacity change from 0 to 113552
Feb 13 18:48:53.181834 kernel: loop3: detected capacity change from 0 to 116784
Feb 13 18:48:53.186836 kernel: loop4: detected capacity change from 0 to 189592
Feb 13 18:48:53.196802 kernel: loop5: detected capacity change from 0 to 113552
Feb 13 18:48:53.200233 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 18:48:53.200600 (sd-merge)[1183]: Merged extensions into '/usr'.
Feb 13 18:48:53.203422 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 18:48:53.203435 systemd[1]: Reloading...
Feb 13 18:48:53.262822 zram_generator::config[1208]: No configuration found.
Feb 13 18:48:53.301537 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 18:48:53.354963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:48:53.390393 systemd[1]: Reloading finished in 186 ms.
Feb 13 18:48:53.420020 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 18:48:53.421227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 18:48:53.433992 systemd[1]: Starting ensure-sysext.service...
Feb 13 18:48:53.436251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:48:53.444481 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Feb 13 18:48:53.444495 systemd[1]: Reloading...
Feb 13 18:48:53.461698 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 18:48:53.461963 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 18:48:53.462576 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 18:48:53.462854 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 18:48:53.462899 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 18:48:53.465579 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:48:53.465591 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 18:48:53.473765 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:48:53.473808 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 18:48:53.500965 zram_generator::config[1270]: No configuration found.
Feb 13 18:48:53.576296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:48:53.612202 systemd[1]: Reloading finished in 167 ms.
Feb 13 18:48:53.627563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 18:48:53.645260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:48:53.652280 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:48:53.654582 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 18:48:53.657189 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 18:48:53.660047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:48:53.665995 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:48:53.675605 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 18:48:53.678926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:48:53.680364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:48:53.682334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:48:53.685356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:48:53.686266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:48:53.688521 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 18:48:53.691368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:48:53.692988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:48:53.694585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:48:53.694697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:48:53.702124 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Feb 13 18:48:53.702153 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 18:48:53.704067 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:48:53.705358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:48:53.708059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:48:53.714059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:48:53.718565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:48:53.719574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:48:53.722137 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 18:48:53.724080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:48:53.724237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:48:53.726026 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 18:48:53.727519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:48:53.727649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:48:53.731856 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 18:48:53.736368 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 18:48:53.737844 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 18:48:53.739102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:48:53.744651 systemd[1]: Finished ensure-sysext.service.
Feb 13 18:48:53.746721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:48:53.752765 augenrules[1372]: No rules
Feb 13 18:48:53.761976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:48:53.765498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:48:53.767944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:48:53.773066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:48:53.774860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:48:53.779415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:48:53.783154 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 18:48:53.784862 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 18:48:53.785589 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:48:53.785887 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:48:53.789807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1370)
Feb 13 18:48:53.790145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:48:53.790273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:48:53.791578 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:48:53.791749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:48:53.795202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:48:53.795479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:48:53.798282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:48:53.798410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:48:53.807358 systemd-resolved[1311]: Positive Trust Anchors:
Feb 13 18:48:53.807377 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:48:53.807409 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:48:53.814094 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 18:48:53.821876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:48:53.821938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:48:53.823908 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Feb 13 18:48:53.829452 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:48:53.832170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:48:53.883039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:48:53.885129 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 18:48:53.892434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 18:48:53.903129 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 18:48:53.904199 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 18:48:53.907200 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 18:48:53.916078 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 18:48:53.917401 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 18:48:53.920113 systemd-networkd[1382]: lo: Link UP
Feb 13 18:48:53.920120 systemd-networkd[1382]: lo: Gained carrier
Feb 13 18:48:53.922717 systemd-networkd[1382]: Enumeration completed
Feb 13 18:48:53.923266 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:48:53.923274 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:48:53.923387 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:48:53.924634 systemd[1]: Reached target network.target - Network.
Feb 13 18:48:53.924636 systemd-networkd[1382]: eth0: Link UP
Feb 13 18:48:53.924647 systemd-networkd[1382]: eth0: Gained carrier
Feb 13 18:48:53.924661 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:48:53.927033 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 18:48:53.937959 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:48:53.944857 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 18:48:53.945752 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection.
Feb 13 18:48:53.946347 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 18:48:53.946402 systemd-timesyncd[1384]: Initial clock synchronization to Thu 2025-02-13 18:48:54.170972 UTC.
Feb 13 18:48:53.961831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:48:53.981088 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 18:48:53.982190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:48:53.983034 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:48:53.983865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 18:48:53.984713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 18:48:53.985816 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 18:48:53.986654 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 18:48:53.987599 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 18:48:53.988491 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 18:48:53.988523 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:48:53.989172 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:48:53.990701 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 18:48:53.992995 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 18:48:54.005999 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 18:48:54.008062 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 18:48:54.009327 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 18:48:54.010242 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:48:54.010956 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:48:54.011677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:48:54.011710 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:48:54.012754 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 18:48:54.014667 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 18:48:54.016264 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:48:54.016937 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 18:48:54.021045 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 18:48:54.021796 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 18:48:54.024028 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 18:48:54.027984 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 18:48:54.031028 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 18:48:54.033934 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 18:48:54.036405 jq[1417]: false Feb 13 18:48:54.041069 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 18:48:54.041721 extend-filesystems[1418]: Found loop3 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found loop4 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found loop5 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda1 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda2 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda3 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found usr Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda4 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda6 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda7 Feb 13 18:48:54.041721 extend-filesystems[1418]: Found vda9 Feb 13 18:48:54.041721 extend-filesystems[1418]: Checking size of /dev/vda9 Feb 13 18:48:54.047202 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 18:48:54.047750 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 18:48:54.051026 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 18:48:54.052870 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 18:48:54.055854 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 18:48:54.056905 dbus-daemon[1416]: [system] SELinux support is enabled Feb 13 18:48:54.057303 extend-filesystems[1418]: Resized partition /dev/vda9 Feb 13 18:48:54.058139 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 18:48:54.061611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 18:48:54.062969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 18:48:54.063338 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 18:48:54.063564 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 18:48:54.065155 jq[1434]: true Feb 13 18:48:54.068689 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 18:48:54.070142 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Feb 13 18:48:54.069258 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 18:48:54.084303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1362) Feb 13 18:48:54.087705 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 18:48:54.095850 tar[1441]: linux-arm64/helm Feb 13 18:48:54.097368 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 18:48:54.097577 systemd-logind[1426]: New seat seat0. Feb 13 18:48:54.110608 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 18:48:54.117178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 18:48:54.117359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 18:48:54.121061 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 18:48:54.121190 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 18:48:54.128073 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 18:48:54.131823 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 18:48:54.137441 jq[1442]: true Feb 13 18:48:54.138583 update_engine[1432]: I20250213 18:48:54.138413 1432 main.cc:92] Flatcar Update Engine starting Feb 13 18:48:54.142602 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 18:48:54.142602 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 18:48:54.142602 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 18:48:54.145406 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Feb 13 18:48:54.146229 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 18:48:54.146611 update_engine[1432]: I20250213 18:48:54.146167 1432 update_check_scheduler.cc:74] Next update check in 9m51s Feb 13 18:48:54.146443 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 18:48:54.148065 systemd[1]: Started update-engine.service - Update Engine. Feb 13 18:48:54.159090 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 18:48:54.208123 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:48:54.217903 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:48:54.219851 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 18:48:54.222433 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 18:48:54.330215 containerd[1444]: time="2025-02-13T18:48:54.330126539Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:48:54.360198 containerd[1444]: time="2025-02-13T18:48:54.360160441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.361912 containerd[1444]: time="2025-02-13T18:48:54.361879365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362020 containerd[1444]: time="2025-02-13T18:48:54.362001708Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:48:54.362076 containerd[1444]: time="2025-02-13T18:48:54.362063352Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362239361Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362263500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362317167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362328558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362479893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362493669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362505512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362514107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362651 containerd[1444]: time="2025-02-13T18:48:54.362598370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.362893 containerd[1444]: time="2025-02-13T18:48:54.362829114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:48:54.363037 containerd[1444]: time="2025-02-13T18:48:54.362930319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:48:54.363037 containerd[1444]: time="2025-02-13T18:48:54.362955487Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 18:48:54.363135 containerd[1444]: time="2025-02-13T18:48:54.363039913Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 18:48:54.363135 containerd[1444]: time="2025-02-13T18:48:54.363080379Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:48:54.368622 containerd[1444]: time="2025-02-13T18:48:54.368591966Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:48:54.368687 containerd[1444]: time="2025-02-13T18:48:54.368644152Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:48:54.368687 containerd[1444]: time="2025-02-13T18:48:54.368659162Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:48:54.368687 containerd[1444]: time="2025-02-13T18:48:54.368674706Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 18:48:54.368757 containerd[1444]: time="2025-02-13T18:48:54.368691032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:48:54.368878 containerd[1444]: time="2025-02-13T18:48:54.368851990Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:48:54.369180 containerd[1444]: time="2025-02-13T18:48:54.369160664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 18:48:54.369293 containerd[1444]: time="2025-02-13T18:48:54.369273836Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:48:54.369335 containerd[1444]: time="2025-02-13T18:48:54.369296413Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 18:48:54.369335 containerd[1444]: time="2025-02-13T18:48:54.369310929Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 18:48:54.369335 containerd[1444]: time="2025-02-13T18:48:54.369324130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369399 containerd[1444]: time="2025-02-13T18:48:54.369337865Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369399 containerd[1444]: time="2025-02-13T18:48:54.369350490Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369399 containerd[1444]: time="2025-02-13T18:48:54.369363156Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369399 containerd[1444]: time="2025-02-13T18:48:54.369378043Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369399 containerd[1444]: time="2025-02-13T18:48:54.369390462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369403005Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369414561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369433683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369447583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369460043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369472051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369483772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369496438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369505 containerd[1444]: time="2025-02-13T18:48:54.369508240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369521811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369534107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369552818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369565032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369577040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369589953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369603935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369623674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369635929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.369661 containerd[1444]: time="2025-02-13T18:48:54.369646456Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:48:54.369848 containerd[1444]: time="2025-02-13T18:48:54.369838010Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:48:54.369869 containerd[1444]: time="2025-02-13T18:48:54.369857050Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:48:54.369919 containerd[1444]: time="2025-02-13T18:48:54.369867866Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:48:54.369919 containerd[1444]: time="2025-02-13T18:48:54.369888427Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:48:54.369919 containerd[1444]: time="2025-02-13T18:48:54.369897845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 18:48:54.369919 containerd[1444]: time="2025-02-13T18:48:54.369909647Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 18:48:54.369919 containerd[1444]: time="2025-02-13T18:48:54.369919599Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:48:54.370005 containerd[1444]: time="2025-02-13T18:48:54.369935349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 18:48:54.370412 containerd[1444]: time="2025-02-13T18:48:54.370358511Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 18:48:54.370542 containerd[1444]: time="2025-02-13T18:48:54.370416578Z" level=info msg="Connect containerd service" Feb 13 18:48:54.370542 containerd[1444]: time="2025-02-13T18:48:54.370452478Z" level=info msg="using legacy CRI server" Feb 13 18:48:54.370542 containerd[1444]: time="2025-02-13T18:48:54.370460374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:48:54.370875 containerd[1444]: time="2025-02-13T18:48:54.370696917Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:48:54.371419 containerd[1444]: time="2025-02-13T18:48:54.371389643Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 18:48:54.372370 containerd[1444]: time="2025-02-13T18:48:54.371906279Z" level=info msg="Start subscribing containerd event" Feb 13 18:48:54.372370 containerd[1444]: time="2025-02-13T18:48:54.372275075Z" level=info msg="Start recovering state" Feb 13 18:48:54.372370 containerd[1444]: time="2025-02-13T18:48:54.371912900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:48:54.372465 containerd[1444]: time="2025-02-13T18:48:54.372405190Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:48:54.375119 containerd[1444]: time="2025-02-13T18:48:54.373398283Z" level=info msg="Start event monitor" Feb 13 18:48:54.375119 containerd[1444]: time="2025-02-13T18:48:54.373433732Z" level=info msg="Start snapshots syncer" Feb 13 18:48:54.375119 containerd[1444]: time="2025-02-13T18:48:54.373451662Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:48:54.375119 containerd[1444]: time="2025-02-13T18:48:54.373459187Z" level=info msg="Start streaming server" Feb 13 18:48:54.375119 containerd[1444]: time="2025-02-13T18:48:54.375047709Z" level=info msg="containerd successfully booted in 0.046056s" Feb 13 18:48:54.373670 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:48:54.462470 tar[1441]: linux-arm64/LICENSE Feb 13 18:48:54.463891 tar[1441]: linux-arm64/README.md Feb 13 18:48:54.476113 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 18:48:54.986506 systemd-networkd[1382]: eth0: Gained IPv6LL Feb 13 18:48:54.992925 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:48:54.994511 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:48:55.003504 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 18:48:55.006377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 18:48:55.008574 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:48:55.040845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:48:55.042160 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 18:48:55.042307 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 18:48:55.044053 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:48:55.105695 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:48:55.125526 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:48:55.137080 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:48:55.142194 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 18:48:55.142362 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:48:55.147255 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 18:48:55.160483 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:48:55.174118 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 18:48:55.176126 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 18:48:55.177188 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:48:55.569776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:48:55.570991 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:48:55.573485 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:48:55.575884 systemd[1]: Startup finished in 515ms (kernel) + 4.744s (initrd) + 3.169s (userspace) = 8.429s. 
Feb 13 18:48:55.585257 agetty[1523]: failed to open credentials directory Feb 13 18:48:55.585300 agetty[1524]: failed to open credentials directory Feb 13 18:48:55.997413 kubelet[1530]: E0213 18:48:55.997354 1530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:48:55.999924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:48:56.000086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:49:00.578525 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:49:00.579668 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). Feb 13 18:49:00.646088 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:00.647699 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:00.656688 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:49:00.666127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 18:49:00.667565 systemd-logind[1426]: New session 1 of user core. Feb 13 18:49:00.674631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:49:00.677871 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 18:49:00.683216 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:49:00.753482 systemd[1547]: Queued start job for default target default.target. 
Feb 13 18:49:00.762650 systemd[1547]: Created slice app.slice - User Application Slice. Feb 13 18:49:00.762693 systemd[1547]: Reached target paths.target - Paths. Feb 13 18:49:00.762705 systemd[1547]: Reached target timers.target - Timers. Feb 13 18:49:00.763957 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:49:00.776053 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:49:00.776157 systemd[1547]: Reached target sockets.target - Sockets. Feb 13 18:49:00.776169 systemd[1547]: Reached target basic.target - Basic System. Feb 13 18:49:00.776202 systemd[1547]: Reached target default.target - Main User Target. Feb 13 18:49:00.776226 systemd[1547]: Startup finished in 88ms. Feb 13 18:49:00.776447 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:49:00.777754 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 18:49:00.840945 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314). Feb 13 18:49:00.883218 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:00.884385 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:00.888370 systemd-logind[1426]: New session 2 of user core. Feb 13 18:49:00.898997 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:49:00.949844 sshd[1560]: Connection closed by 10.0.0.1 port 52314 Feb 13 18:49:00.950236 sshd-session[1558]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:00.965062 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:52314.service: Deactivated successfully. Feb 13 18:49:00.966993 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 18:49:00.969162 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 18:49:00.982352 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:52328.service - OpenSSH per-connection server daemon (10.0.0.1:52328). Feb 13 18:49:00.983632 systemd-logind[1426]: Removed session 2. Feb 13 18:49:01.017471 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52328 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:01.018529 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:01.021883 systemd-logind[1426]: New session 3 of user core. Feb 13 18:49:01.035004 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:49:01.083898 sshd[1567]: Connection closed by 10.0.0.1 port 52328 Feb 13 18:49:01.084198 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:01.097871 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:52328.service: Deactivated successfully. Feb 13 18:49:01.099555 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 18:49:01.102877 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Feb 13 18:49:01.103199 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Feb 13 18:49:01.104323 systemd-logind[1426]: Removed session 3. Feb 13 18:49:01.144328 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:01.145420 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:01.149942 systemd-logind[1426]: New session 4 of user core. Feb 13 18:49:01.160946 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 18:49:01.212006 sshd[1574]: Connection closed by 10.0.0.1 port 52344 Feb 13 18:49:01.212380 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:01.225134 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:52344.service: Deactivated successfully. 
Feb 13 18:49:01.227092 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:49:01.229463 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:49:01.229878 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:52360.service - OpenSSH per-connection server daemon (10.0.0.1:52360). Feb 13 18:49:01.231445 systemd-logind[1426]: Removed session 4. Feb 13 18:49:01.269004 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 52360 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:01.270073 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:01.273546 systemd-logind[1426]: New session 5 of user core. Feb 13 18:49:01.282964 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:49:01.365806 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 18:49:01.369711 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:49:01.738043 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 18:49:01.738184 (dockerd)[1602]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 18:49:01.993864 dockerd[1602]: time="2025-02-13T18:49:01.993488017Z" level=info msg="Starting up" Feb 13 18:49:02.135271 dockerd[1602]: time="2025-02-13T18:49:02.135220934Z" level=info msg="Loading containers: start." Feb 13 18:49:02.280812 kernel: Initializing XFRM netlink socket Feb 13 18:49:02.346728 systemd-networkd[1382]: docker0: Link UP Feb 13 18:49:02.375015 dockerd[1602]: time="2025-02-13T18:49:02.374977763Z" level=info msg="Loading containers: done." Feb 13 18:49:02.386369 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3873293093-merged.mount: Deactivated successfully. 
Feb 13 18:49:02.387015 dockerd[1602]: time="2025-02-13T18:49:02.386967297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 18:49:02.387097 dockerd[1602]: time="2025-02-13T18:49:02.387061800Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 18:49:02.387253 dockerd[1602]: time="2025-02-13T18:49:02.387229564Z" level=info msg="Daemon has completed initialization" Feb 13 18:49:02.418783 dockerd[1602]: time="2025-02-13T18:49:02.418714180Z" level=info msg="API listen on /run/docker.sock" Feb 13 18:49:02.419016 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 18:49:03.079283 containerd[1444]: time="2025-02-13T18:49:03.079232968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 18:49:03.773719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162039431.mount: Deactivated successfully. 
Feb 13 18:49:05.718468 containerd[1444]: time="2025-02-13T18:49:05.718418989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:05.719396 containerd[1444]: time="2025-02-13T18:49:05.719221786Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 18:49:05.720194 containerd[1444]: time="2025-02-13T18:49:05.720156753Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:05.723137 containerd[1444]: time="2025-02-13T18:49:05.723103645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:05.724381 containerd[1444]: time="2025-02-13T18:49:05.724353152Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.645079823s" Feb 13 18:49:05.724434 containerd[1444]: time="2025-02-13T18:49:05.724389586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 18:49:05.725038 containerd[1444]: time="2025-02-13T18:49:05.724971202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 18:49:06.100031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 13 18:49:06.109068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:49:06.198823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:06.202661 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:49:06.287746 kubelet[1861]: E0213 18:49:06.287572 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:49:06.291448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:49:06.291595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:49:07.783334 containerd[1444]: time="2025-02-13T18:49:07.783285316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:07.785000 containerd[1444]: time="2025-02-13T18:49:07.784953493Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 18:49:07.785923 containerd[1444]: time="2025-02-13T18:49:07.785867435Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:07.788191 containerd[1444]: time="2025-02-13T18:49:07.788161335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:07.790183 containerd[1444]: time="2025-02-13T18:49:07.789679414Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.064678632s" Feb 13 18:49:07.790183 containerd[1444]: time="2025-02-13T18:49:07.789711411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 18:49:07.790621 containerd[1444]: time="2025-02-13T18:49:07.790588411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 18:49:09.516829 containerd[1444]: time="2025-02-13T18:49:09.516771039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:09.517735 containerd[1444]: time="2025-02-13T18:49:09.517517698Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 18:49:09.518426 containerd[1444]: time="2025-02-13T18:49:09.518396617Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:09.521566 containerd[1444]: time="2025-02-13T18:49:09.521536513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:09.522582 containerd[1444]: time="2025-02-13T18:49:09.522528378Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id 
\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.731844385s" Feb 13 18:49:09.522582 containerd[1444]: time="2025-02-13T18:49:09.522562587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 18:49:09.523373 containerd[1444]: time="2025-02-13T18:49:09.523285517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 18:49:10.893858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896084126.mount: Deactivated successfully. Feb 13 18:49:11.162921 containerd[1444]: time="2025-02-13T18:49:11.162801753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:11.164105 containerd[1444]: time="2025-02-13T18:49:11.164003751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 18:49:11.164863 containerd[1444]: time="2025-02-13T18:49:11.164829701Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:11.166625 containerd[1444]: time="2025-02-13T18:49:11.166572423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:11.167422 containerd[1444]: time="2025-02-13T18:49:11.167341488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.644022654s" Feb 13 18:49:11.167422 containerd[1444]: time="2025-02-13T18:49:11.167373581Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 18:49:11.167955 containerd[1444]: time="2025-02-13T18:49:11.167919641Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 18:49:12.147293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165042034.mount: Deactivated successfully. Feb 13 18:49:13.206533 containerd[1444]: time="2025-02-13T18:49:13.206461368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.210372 containerd[1444]: time="2025-02-13T18:49:13.210332229Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 18:49:13.214805 containerd[1444]: time="2025-02-13T18:49:13.214766659Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.218295 containerd[1444]: time="2025-02-13T18:49:13.218260404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.219375 containerd[1444]: time="2025-02-13T18:49:13.219335588Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.051382898s" Feb 13 18:49:13.219407 containerd[1444]: time="2025-02-13T18:49:13.219374835Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 18:49:13.220011 containerd[1444]: time="2025-02-13T18:49:13.219791999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 18:49:13.843890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378470901.mount: Deactivated successfully. Feb 13 18:49:13.847549 containerd[1444]: time="2025-02-13T18:49:13.847505567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.848446 containerd[1444]: time="2025-02-13T18:49:13.848400190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 18:49:13.849228 containerd[1444]: time="2025-02-13T18:49:13.849199723Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.851694 containerd[1444]: time="2025-02-13T18:49:13.851663464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:13.853169 containerd[1444]: time="2025-02-13T18:49:13.853137973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 633.315706ms" Feb 13 
18:49:13.853209 containerd[1444]: time="2025-02-13T18:49:13.853171728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 18:49:13.853669 containerd[1444]: time="2025-02-13T18:49:13.853638763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 18:49:14.485089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127761797.mount: Deactivated successfully. Feb 13 18:49:16.349818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 18:49:16.358970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:49:16.459122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:16.462993 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:49:16.498035 kubelet[1989]: E0213 18:49:16.497949 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:49:16.500509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:49:16.500645 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 18:49:17.398689 containerd[1444]: time="2025-02-13T18:49:17.398628661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:17.400426 containerd[1444]: time="2025-02-13T18:49:17.400381140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 18:49:17.401566 containerd[1444]: time="2025-02-13T18:49:17.401518980Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:17.404272 containerd[1444]: time="2025-02-13T18:49:17.404237316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:17.406640 containerd[1444]: time="2025-02-13T18:49:17.406091167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.552424147s" Feb 13 18:49:17.406640 containerd[1444]: time="2025-02-13T18:49:17.406123008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 18:49:22.711860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:22.721994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:49:22.744157 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-5.scope)... Feb 13 18:49:22.744173 systemd[1]: Reloading... 
Feb 13 18:49:22.806821 zram_generator::config[2071]: No configuration found. Feb 13 18:49:22.911493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:49:22.963028 systemd[1]: Reloading finished in 218 ms. Feb 13 18:49:23.004644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 18:49:23.004902 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 18:49:23.005910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:23.009396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:49:23.098844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:49:23.102577 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:49:23.139085 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:49:23.139085 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 18:49:23.139085 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 18:49:23.139432 kubelet[2119]: I0213 18:49:23.139282 2119 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:49:23.591478 kubelet[2119]: I0213 18:49:23.591429 2119 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 18:49:23.591478 kubelet[2119]: I0213 18:49:23.591468 2119 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:49:23.591743 kubelet[2119]: I0213 18:49:23.591712 2119 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 18:49:23.635900 kubelet[2119]: E0213 18:49:23.635842 2119 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:49:23.636955 kubelet[2119]: I0213 18:49:23.636933 2119 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:49:23.646508 kubelet[2119]: E0213 18:49:23.646294 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:49:23.646508 kubelet[2119]: I0213 18:49:23.646333 2119 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:49:23.649881 kubelet[2119]: I0213 18:49:23.649854 2119 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 18:49:23.650596 kubelet[2119]: I0213 18:49:23.650554 2119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 18:49:23.650766 kubelet[2119]: I0213 18:49:23.650705 2119 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:49:23.650958 kubelet[2119]: I0213 18:49:23.650743 2119 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 18:49:23.651100 kubelet[2119]: I0213 18:49:23.651079 2119 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:49:23.651100 kubelet[2119]: I0213 18:49:23.651091 2119 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 18:49:23.651300 kubelet[2119]: I0213 18:49:23.651278 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:49:23.653517 kubelet[2119]: I0213 18:49:23.653183 2119 kubelet.go:408] "Attempting to sync node with API server" Feb 13 18:49:23.653517 kubelet[2119]: I0213 18:49:23.653213 2119 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:49:23.653517 kubelet[2119]: I0213 18:49:23.653300 2119 kubelet.go:314] "Adding apiserver pod source" Feb 13 18:49:23.653517 kubelet[2119]: I0213 18:49:23.653310 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:49:23.656039 kubelet[2119]: W0213 18:49:23.655981 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Feb 13 18:49:23.656283 kubelet[2119]: E0213 18:49:23.656039 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:49:23.656430 kubelet[2119]: I0213 18:49:23.656410 2119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:49:23.657210 kubelet[2119]: W0213 18:49:23.657166 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Feb 13 18:49:23.657265 kubelet[2119]: E0213 18:49:23.657224 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:49:23.659245 kubelet[2119]: I0213 18:49:23.659223 2119 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:49:23.661835 kubelet[2119]: W0213 18:49:23.661816 2119 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 18:49:23.662738 kubelet[2119]: I0213 18:49:23.662717 2119 server.go:1269] "Started kubelet" Feb 13 18:49:23.663317 kubelet[2119]: I0213 18:49:23.663258 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:49:23.664057 kubelet[2119]: I0213 18:49:23.663530 2119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:49:23.664057 kubelet[2119]: I0213 18:49:23.663808 2119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:49:23.664885 kubelet[2119]: I0213 18:49:23.664864 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:49:23.665130 kubelet[2119]: I0213 18:49:23.665100 2119 server.go:460] "Adding debug handlers to kubelet server" Feb 13 18:49:23.669341 kubelet[2119]: I0213 18:49:23.667389 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:49:23.669341 kubelet[2119]: I0213 
18:49:23.668072 2119 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 18:49:23.669341 kubelet[2119]: I0213 18:49:23.668194 2119 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 18:49:23.669341 kubelet[2119]: I0213 18:49:23.668259 2119 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:49:23.669341 kubelet[2119]: W0213 18:49:23.668547 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Feb 13 18:49:23.669341 kubelet[2119]: E0213 18:49:23.668585 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Feb 13 18:49:23.669341 kubelet[2119]: E0213 18:49:23.668761 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Feb 13 18:49:23.671382 kubelet[2119]: E0213 18:49:23.670305 2119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9149da72b41 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:49:23.662695233 +0000 UTC 
m=+0.557279093,LastTimestamp:2025-02-13 18:49:23.662695233 +0000 UTC m=+0.557279093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 18:49:23.671744 kubelet[2119]: E0213 18:49:23.671720 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:49:23.672413 kubelet[2119]: I0213 18:49:23.672210 2119 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:49:23.672413 kubelet[2119]: I0213 18:49:23.672346 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:49:23.673570 kubelet[2119]: E0213 18:49:23.673503 2119 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:49:23.673962 kubelet[2119]: I0213 18:49:23.673942 2119 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:49:23.681821 kubelet[2119]: I0213 18:49:23.681687 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:49:23.682775 kubelet[2119]: I0213 18:49:23.682741 2119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Feb 13 18:49:23.682903 kubelet[2119]: I0213 18:49:23.682880 2119 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 18:49:23.682903 kubelet[2119]: I0213 18:49:23.682903 2119 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 18:49:23.682957 kubelet[2119]: E0213 18:49:23.682946 2119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 18:49:23.687886 kubelet[2119]: W0213 18:49:23.687760 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused
Feb 13 18:49:23.688018 kubelet[2119]: E0213 18:49:23.687890 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:49:23.688018 kubelet[2119]: I0213 18:49:23.687728 2119 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 18:49:23.688018 kubelet[2119]: I0213 18:49:23.687920 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 18:49:23.688018 kubelet[2119]: I0213 18:49:23.687935 2119 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:49:23.747843 kubelet[2119]: I0213 18:49:23.747813 2119 policy_none.go:49] "None policy: Start"
Feb 13 18:49:23.748662 kubelet[2119]: I0213 18:49:23.748641 2119 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 18:49:23.748883 kubelet[2119]: I0213 18:49:23.748755 2119 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 18:49:23.755816 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 18:49:23.766676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 18:49:23.769700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 18:49:23.772605 kubelet[2119]: E0213 18:49:23.772576 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 18:49:23.783846 kubelet[2119]: E0213 18:49:23.783805 2119 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 18:49:23.786159 kubelet[2119]: I0213 18:49:23.785698 2119 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 18:49:23.786159 kubelet[2119]: I0213 18:49:23.785941 2119 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 18:49:23.786159 kubelet[2119]: I0213 18:49:23.785962 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 18:49:23.786370 kubelet[2119]: I0213 18:49:23.786235 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 18:49:23.787646 kubelet[2119]: E0213 18:49:23.787593 2119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 18:49:23.869947 kubelet[2119]: E0213 18:49:23.869902 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms"
Feb 13 18:49:23.887701 kubelet[2119]: I0213 18:49:23.887261 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 18:49:23.887701 kubelet[2119]: E0213 18:49:23.887676 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost"
Feb 13 18:49:23.991560 systemd[1]: Created slice kubepods-burstable-podfcb65df29498dafc9f0923d5f078876f.slice - libcontainer container kubepods-burstable-podfcb65df29498dafc9f0923d5f078876f.slice.
Feb 13 18:49:24.019152 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice.
Feb 13 18:49:24.019807 kubelet[2119]: W0213 18:49:24.019762 2119 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice/cpuset.cpus.effective": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice/cpuset.cpus.effective: no such device
Feb 13 18:49:24.038018 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice.
Feb 13 18:49:24.070986 kubelet[2119]: I0213 18:49:24.070894 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:49:24.070986 kubelet[2119]: I0213 18:49:24.070937 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:49:24.070986 kubelet[2119]: I0213 18:49:24.070958 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:49:24.070986 kubelet[2119]: I0213 18:49:24.070978 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:49:24.071266 kubelet[2119]: I0213 18:49:24.071005 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:49:24.071266 kubelet[2119]: I0213 18:49:24.071020 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 18:49:24.071266 kubelet[2119]: I0213 18:49:24.071035 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 18:49:24.071266 kubelet[2119]: I0213 18:49:24.071048 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:49:24.071266 kubelet[2119]: I0213 18:49:24.071064 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 18:49:24.089019 kubelet[2119]: I0213 18:49:24.088988 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 18:49:24.089401 kubelet[2119]: E0213 18:49:24.089356 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost"
Feb 13 18:49:24.270623 kubelet[2119]: E0213 18:49:24.270508 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms"
Feb 13 18:49:24.319146 kubelet[2119]: E0213 18:49:24.319054 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.319891 containerd[1444]: time="2025-02-13T18:49:24.319859587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fcb65df29498dafc9f0923d5f078876f,Namespace:kube-system,Attempt:0,}"
Feb 13 18:49:24.336138 kubelet[2119]: E0213 18:49:24.336054 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.336568 containerd[1444]: time="2025-02-13T18:49:24.336529108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}"
Feb 13 18:49:24.340903 kubelet[2119]: E0213 18:49:24.340877 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.341360 containerd[1444]: time="2025-02-13T18:49:24.341326120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}"
Feb 13 18:49:24.491489 kubelet[2119]: I0213 18:49:24.491462 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 18:49:24.491847 kubelet[2119]: E0213 18:49:24.491819 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost"
Feb 13 18:49:24.766103 kubelet[2119]: W0213 18:49:24.765853 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused
Feb 13 18:49:24.766103 kubelet[2119]: E0213 18:49:24.766061 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:49:24.767435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132836712.mount: Deactivated successfully.
Feb 13 18:49:24.773431 containerd[1444]: time="2025-02-13T18:49:24.773380811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:49:24.774756 containerd[1444]: time="2025-02-13T18:49:24.774725498Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:49:24.775937 containerd[1444]: time="2025-02-13T18:49:24.775886572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 18:49:24.776618 containerd[1444]: time="2025-02-13T18:49:24.776576484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 18:49:24.777807 containerd[1444]: time="2025-02-13T18:49:24.777725512Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:49:24.779384 containerd[1444]: time="2025-02-13T18:49:24.779163607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 18:49:24.781259 containerd[1444]: time="2025-02-13T18:49:24.781209973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:49:24.783630 containerd[1444]: time="2025-02-13T18:49:24.783550409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 442.145489ms"
Feb 13 18:49:24.784118 containerd[1444]: time="2025-02-13T18:49:24.784059910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:49:24.785002 containerd[1444]: time="2025-02-13T18:49:24.784971055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.026225ms"
Feb 13 18:49:24.787296 containerd[1444]: time="2025-02-13T18:49:24.787259545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 450.648396ms"
Feb 13 18:49:24.900916 kubelet[2119]: W0213 18:49:24.900776 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused
Feb 13 18:49:24.900916 kubelet[2119]: E0213 18:49:24.900874 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:49:24.930099 containerd[1444]: time="2025-02-13T18:49:24.929891013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931322825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931365047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931376092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931441406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.930962081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931095909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.931580 containerd[1444]: time="2025-02-13T18:49:24.931284125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.935924 containerd[1444]: time="2025-02-13T18:49:24.934234313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:49:24.935924 containerd[1444]: time="2025-02-13T18:49:24.934368102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:49:24.935924 containerd[1444]: time="2025-02-13T18:49:24.934426892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.935924 containerd[1444]: time="2025-02-13T18:49:24.934549515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:49:24.952162 systemd[1]: Started cri-containerd-1bb399d4a25f0cf011480a215ba0748fd0d852b7c7c857d25f1cb3773fa21a89.scope - libcontainer container 1bb399d4a25f0cf011480a215ba0748fd0d852b7c7c857d25f1cb3773fa21a89.
Feb 13 18:49:24.956033 systemd[1]: Started cri-containerd-26c9918fab691ba8fa0f0152ef4110f783741053187fc6a98aca975dc4145f68.scope - libcontainer container 26c9918fab691ba8fa0f0152ef4110f783741053187fc6a98aca975dc4145f68.
Feb 13 18:49:24.957320 systemd[1]: Started cri-containerd-f7a41e79b59482777c25058c708f70188a1d43521ec85ef2dca34472a4f9a721.scope - libcontainer container f7a41e79b59482777c25058c708f70188a1d43521ec85ef2dca34472a4f9a721.
Feb 13 18:49:24.988731 containerd[1444]: time="2025-02-13T18:49:24.988630479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bb399d4a25f0cf011480a215ba0748fd0d852b7c7c857d25f1cb3773fa21a89\""
Feb 13 18:49:24.990068 kubelet[2119]: E0213 18:49:24.989848 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.995975 containerd[1444]: time="2025-02-13T18:49:24.995936213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c9918fab691ba8fa0f0152ef4110f783741053187fc6a98aca975dc4145f68\""
Feb 13 18:49:24.996696 containerd[1444]: time="2025-02-13T18:49:24.996665226Z" level=info msg="CreateContainer within sandbox \"1bb399d4a25f0cf011480a215ba0748fd0d852b7c7c857d25f1cb3773fa21a89\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 18:49:24.996810 kubelet[2119]: E0213 18:49:24.996754 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.998632 containerd[1444]: time="2025-02-13T18:49:24.998252797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fcb65df29498dafc9f0923d5f078876f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7a41e79b59482777c25058c708f70188a1d43521ec85ef2dca34472a4f9a721\""
Feb 13 18:49:24.999365 kubelet[2119]: E0213 18:49:24.999342 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:24.999537 containerd[1444]: time="2025-02-13T18:49:24.999506918Z" level=info msg="CreateContainer within sandbox \"26c9918fab691ba8fa0f0152ef4110f783741053187fc6a98aca975dc4145f68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 18:49:25.001220 containerd[1444]: time="2025-02-13T18:49:25.001189312Z" level=info msg="CreateContainer within sandbox \"f7a41e79b59482777c25058c708f70188a1d43521ec85ef2dca34472a4f9a721\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 18:49:25.058840 containerd[1444]: time="2025-02-13T18:49:25.058589307Z" level=info msg="CreateContainer within sandbox \"26c9918fab691ba8fa0f0152ef4110f783741053187fc6a98aca975dc4145f68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40404973c10e088090390599a454900f9ab73fc7c025b8759b099c4193ef01ee\""
Feb 13 18:49:25.059463 containerd[1444]: time="2025-02-13T18:49:25.059434845Z" level=info msg="StartContainer for \"40404973c10e088090390599a454900f9ab73fc7c025b8759b099c4193ef01ee\""
Feb 13 18:49:25.062905 containerd[1444]: time="2025-02-13T18:49:25.062867461Z" level=info msg="CreateContainer within sandbox \"1bb399d4a25f0cf011480a215ba0748fd0d852b7c7c857d25f1cb3773fa21a89\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dda8bb06f70156276052fb2be8bbb18d503e3dd99379ec23b2ddf9de0d1edb14\""
Feb 13 18:49:25.063494 containerd[1444]: time="2025-02-13T18:49:25.063465809Z" level=info msg="StartContainer for \"dda8bb06f70156276052fb2be8bbb18d503e3dd99379ec23b2ddf9de0d1edb14\""
Feb 13 18:49:25.064296 containerd[1444]: time="2025-02-13T18:49:25.064266927Z" level=info msg="CreateContainer within sandbox \"f7a41e79b59482777c25058c708f70188a1d43521ec85ef2dca34472a4f9a721\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d28e93c0414b5294bbc2b8643b2a87948f2f9e214c840186b88b4d3771e72b5f\""
Feb 13 18:49:25.064750 containerd[1444]: time="2025-02-13T18:49:25.064653580Z" level=info msg="StartContainer for \"d28e93c0414b5294bbc2b8643b2a87948f2f9e214c840186b88b4d3771e72b5f\""
Feb 13 18:49:25.071814 kubelet[2119]: E0213 18:49:25.071743 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s"
Feb 13 18:49:25.084943 systemd[1]: Started cri-containerd-40404973c10e088090390599a454900f9ab73fc7c025b8759b099c4193ef01ee.scope - libcontainer container 40404973c10e088090390599a454900f9ab73fc7c025b8759b099c4193ef01ee.
Feb 13 18:49:25.088527 systemd[1]: Started cri-containerd-d28e93c0414b5294bbc2b8643b2a87948f2f9e214c840186b88b4d3771e72b5f.scope - libcontainer container d28e93c0414b5294bbc2b8643b2a87948f2f9e214c840186b88b4d3771e72b5f.
Feb 13 18:49:25.089820 systemd[1]: Started cri-containerd-dda8bb06f70156276052fb2be8bbb18d503e3dd99379ec23b2ddf9de0d1edb14.scope - libcontainer container dda8bb06f70156276052fb2be8bbb18d503e3dd99379ec23b2ddf9de0d1edb14.
Feb 13 18:49:25.131057 kubelet[2119]: W0213 18:49:25.130121 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused
Feb 13 18:49:25.131057 kubelet[2119]: E0213 18:49:25.130187 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:49:25.146719 kubelet[2119]: W0213 18:49:25.141893 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused
Feb 13 18:49:25.146719 kubelet[2119]: E0213 18:49:25.141965 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:49:25.153203 containerd[1444]: time="2025-02-13T18:49:25.153145842Z" level=info msg="StartContainer for \"d28e93c0414b5294bbc2b8643b2a87948f2f9e214c840186b88b4d3771e72b5f\" returns successfully"
Feb 13 18:49:25.153551 containerd[1444]: time="2025-02-13T18:49:25.153174175Z" level=info msg="StartContainer for \"40404973c10e088090390599a454900f9ab73fc7c025b8759b099c4193ef01ee\" returns successfully"
Feb 13 18:49:25.153772 containerd[1444]: time="2025-02-13T18:49:25.153178337Z" level=info msg="StartContainer for \"dda8bb06f70156276052fb2be8bbb18d503e3dd99379ec23b2ddf9de0d1edb14\" returns successfully"
Feb 13 18:49:25.293408 kubelet[2119]: I0213 18:49:25.293369 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 18:49:25.293749 kubelet[2119]: E0213 18:49:25.293701 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost"
Feb 13 18:49:25.692980 kubelet[2119]: E0213 18:49:25.692945 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:25.695119 kubelet[2119]: E0213 18:49:25.695056 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:25.696525 kubelet[2119]: E0213 18:49:25.696502 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:26.700226 kubelet[2119]: E0213 18:49:26.700194 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:26.895286 kubelet[2119]: I0213 18:49:26.895002 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 18:49:27.406670 kubelet[2119]: E0213 18:49:27.406629 2119 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 18:49:27.434036 kubelet[2119]: E0213 18:49:27.433930 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9149da72b41 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:49:23.662695233 +0000 UTC m=+0.557279093,LastTimestamp:2025-02-13 18:49:23.662695233 +0000 UTC m=+0.557279093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 18:49:27.482421 kubelet[2119]: I0213 18:49:27.482376 2119 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 18:49:27.482421 kubelet[2119]: E0213 18:49:27.482417 2119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 18:49:27.487198 kubelet[2119]: E0213 18:49:27.487083 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9149e4be69b default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:49:23.673491099 +0000 UTC m=+0.568075000,LastTimestamp:2025-02-13 18:49:23.673491099 +0000 UTC m=+0.568075000,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 18:49:27.548329 kubelet[2119]: E0213 18:49:27.548218 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9149f1a9d7d default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:49:23.687038333 +0000 UTC m=+0.581622233,LastTimestamp:2025-02-13 18:49:23.687038333 +0000 UTC m=+0.581622233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 18:49:27.602604 kubelet[2119]: E0213 18:49:27.602496 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823d9149f1ab0e0 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 18:49:23.687043296 +0000 UTC m=+0.581627156,LastTimestamp:2025-02-13 18:49:23.687043296 +0000 UTC m=+0.581627156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 18:49:27.657433 kubelet[2119]: I0213 18:49:27.657236 2119 apiserver.go:52] "Watching apiserver"
Feb 13 18:49:27.668990 kubelet[2119]: I0213 18:49:27.668461 2119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 18:49:29.261380 kubelet[2119]: E0213 18:49:29.261286 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:29.563872 systemd[1]: Reloading requested from client PID 2396 ('systemctl') (unit session-5.scope)...
Feb 13 18:49:29.563886 systemd[1]: Reloading...
Feb 13 18:49:29.618899 zram_generator::config[2438]: No configuration found.
Feb 13 18:49:29.703820 kubelet[2119]: E0213 18:49:29.703708 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 18:49:29.765972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:49:29.833330 systemd[1]: Reloading finished in 269 ms.
Feb 13 18:49:29.868769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:49:29.868938 kubelet[2119]: I0213 18:49:29.868850 2119 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 18:49:29.878685 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 18:49:29.878968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:49:29.889000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:49:29.987567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:49:29.992000 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 18:49:30.032996 kubelet[2477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:49:30.032996 kubelet[2477]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 18:49:30.032996 kubelet[2477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:49:30.033349 kubelet[2477]: I0213 18:49:30.033059 2477 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:49:30.039826 kubelet[2477]: I0213 18:49:30.039771 2477 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 18:49:30.039826 kubelet[2477]: I0213 18:49:30.039821 2477 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:49:30.040068 kubelet[2477]: I0213 18:49:30.040038 2477 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 18:49:30.041386 kubelet[2477]: I0213 18:49:30.041356 2477 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 18:49:30.043422 kubelet[2477]: I0213 18:49:30.043400 2477 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:49:30.046258 kubelet[2477]: E0213 18:49:30.046221 2477 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 18:49:30.046258 kubelet[2477]: I0213 18:49:30.046253 2477 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 18:49:30.048919 kubelet[2477]: I0213 18:49:30.048837 2477 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 18:49:30.049367 kubelet[2477]: I0213 18:49:30.049007 2477 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 18:49:30.049367 kubelet[2477]: I0213 18:49:30.049101 2477 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:49:30.049367 kubelet[2477]: I0213 18:49:30.049128 2477 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 18:49:30.049552 kubelet[2477]: I0213 18:49:30.049381 2477 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:49:30.049552 kubelet[2477]: I0213 18:49:30.049390 2477 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 18:49:30.049552 kubelet[2477]: I0213 18:49:30.049424 2477 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:49:30.049552 kubelet[2477]: I0213 18:49:30.049522 2477 kubelet.go:408] "Attempting to sync node with API server" Feb 13 18:49:30.049552 kubelet[2477]: I0213 18:49:30.049534 2477 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:49:30.049687 kubelet[2477]: I0213 18:49:30.049558 2477 kubelet.go:314] "Adding apiserver pod source" Feb 13 18:49:30.049687 kubelet[2477]: I0213 18:49:30.049568 2477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:49:30.051616 kubelet[2477]: I0213 18:49:30.050799 2477 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:49:30.051616 kubelet[2477]: I0213 18:49:30.051333 2477 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:49:30.052840 kubelet[2477]: I0213 18:49:30.052422 2477 server.go:1269] "Started kubelet" Feb 13 18:49:30.052840 kubelet[2477]: I0213 18:49:30.052544 2477 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:49:30.052840 kubelet[2477]: I0213 18:49:30.052698 2477 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:49:30.052986 kubelet[2477]: I0213 18:49:30.052932 2477 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:49:30.055161 kubelet[2477]: I0213 18:49:30.055138 2477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 
18:49:30.055524 kubelet[2477]: I0213 18:49:30.055499 2477 server.go:460] "Adding debug handlers to kubelet server" Feb 13 18:49:30.057138 kubelet[2477]: I0213 18:49:30.055639 2477 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 18:49:30.058085 kubelet[2477]: I0213 18:49:30.057862 2477 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 18:49:30.058419 kubelet[2477]: I0213 18:49:30.057948 2477 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 18:49:30.059919 kubelet[2477]: I0213 18:49:30.059905 2477 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:49:30.060177 kubelet[2477]: I0213 18:49:30.060158 2477 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:49:30.060335 kubelet[2477]: I0213 18:49:30.060315 2477 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:49:30.060962 kubelet[2477]: E0213 18:49:30.060943 2477 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 18:49:30.061297 kubelet[2477]: E0213 18:49:30.061195 2477 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:49:30.066946 kubelet[2477]: I0213 18:49:30.066913 2477 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:49:30.072508 kubelet[2477]: I0213 18:49:30.071218 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:49:30.072508 kubelet[2477]: I0213 18:49:30.072178 2477 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 18:49:30.072508 kubelet[2477]: I0213 18:49:30.072196 2477 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 18:49:30.072508 kubelet[2477]: I0213 18:49:30.072211 2477 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 18:49:30.072508 kubelet[2477]: E0213 18:49:30.072248 2477 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:49:30.107214 kubelet[2477]: I0213 18:49:30.107190 2477 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 18:49:30.107214 kubelet[2477]: I0213 18:49:30.107208 2477 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 18:49:30.107214 kubelet[2477]: I0213 18:49:30.107227 2477 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:49:30.107392 kubelet[2477]: I0213 18:49:30.107375 2477 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 18:49:30.107418 kubelet[2477]: I0213 18:49:30.107386 2477 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 18:49:30.107418 kubelet[2477]: I0213 18:49:30.107403 2477 policy_none.go:49] "None policy: Start" Feb 13 18:49:30.107906 kubelet[2477]: I0213 18:49:30.107892 2477 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 18:49:30.107976 kubelet[2477]: I0213 18:49:30.107911 2477 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:49:30.108062 kubelet[2477]: I0213 18:49:30.108046 2477 state_mem.go:75] "Updated machine memory state" Feb 13 18:49:30.112999 kubelet[2477]: I0213 18:49:30.112973 2477 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:49:30.113451 kubelet[2477]: I0213 18:49:30.113134 2477 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 18:49:30.113451 kubelet[2477]: I0213 18:49:30.113155 2477 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:49:30.113451 kubelet[2477]: I0213 18:49:30.113340 2477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:49:30.179934 kubelet[2477]: E0213 18:49:30.179892 2477 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 18:49:30.217099 kubelet[2477]: I0213 18:49:30.217051 2477 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 18:49:30.253198 kubelet[2477]: I0213 18:49:30.253150 2477 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 18:49:30.253320 kubelet[2477]: I0213 18:49:30.253243 2477 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 18:49:30.260619 kubelet[2477]: I0213 18:49:30.260585 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:49:30.260935 kubelet[2477]: I0213 18:49:30.260630 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:49:30.260935 kubelet[2477]: I0213 18:49:30.260648 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") 
" pod="kube-system/kube-controller-manager-localhost" Feb 13 18:49:30.260935 kubelet[2477]: I0213 18:49:30.260671 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:49:30.260935 kubelet[2477]: I0213 18:49:30.260686 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb65df29498dafc9f0923d5f078876f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fcb65df29498dafc9f0923d5f078876f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 18:49:30.260935 kubelet[2477]: I0213 18:49:30.260700 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:49:30.261069 kubelet[2477]: I0213 18:49:30.260743 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 18:49:30.261069 kubelet[2477]: I0213 18:49:30.260772 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 18:49:30.261069 kubelet[2477]: I0213 18:49:30.260811 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 18:49:30.481041 kubelet[2477]: E0213 18:49:30.480931 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:30.481041 kubelet[2477]: E0213 18:49:30.480972 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:30.481041 kubelet[2477]: E0213 18:49:30.480983 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:31.051282 kubelet[2477]: I0213 18:49:31.051189 2477 apiserver.go:52] "Watching apiserver" Feb 13 18:49:31.059971 kubelet[2477]: I0213 18:49:31.059941 2477 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 18:49:31.094003 kubelet[2477]: E0213 18:49:31.093301 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:31.094003 kubelet[2477]: E0213 18:49:31.093980 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:31.094169 kubelet[2477]: E0213 18:49:31.094087 2477 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:31.123346 kubelet[2477]: I0213 18:49:31.123252 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.123228888 podStartE2EDuration="2.123228888s" podCreationTimestamp="2025-02-13 18:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:31.111699298 +0000 UTC m=+1.116302481" watchObservedRunningTime="2025-02-13 18:49:31.123228888 +0000 UTC m=+1.127831991" Feb 13 18:49:31.135206 kubelet[2477]: I0213 18:49:31.135139 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.135120243 podStartE2EDuration="1.135120243s" podCreationTimestamp="2025-02-13 18:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:31.12320556 +0000 UTC m=+1.127808663" watchObservedRunningTime="2025-02-13 18:49:31.135120243 +0000 UTC m=+1.139723346" Feb 13 18:49:31.395409 sudo[1582]: pam_unix(sudo:session): session closed for user root Feb 13 18:49:31.397815 sshd[1581]: Connection closed by 10.0.0.1 port 52360 Feb 13 18:49:31.398022 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:31.401457 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:52360.service: Deactivated successfully. Feb 13 18:49:31.404046 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 18:49:31.404199 systemd[1]: session-5.scope: Consumed 6.422s CPU time, 158.3M memory peak, 0B memory swap peak. Feb 13 18:49:31.405982 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Feb 13 18:49:31.407257 systemd-logind[1426]: Removed session 5. 
Feb 13 18:49:32.095006 kubelet[2477]: E0213 18:49:32.094968 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:33.470382 kubelet[2477]: E0213 18:49:33.470305 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:34.271158 kubelet[2477]: E0213 18:49:34.271089 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:34.469881 kubelet[2477]: I0213 18:49:34.469847 2477 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 18:49:34.470171 containerd[1444]: time="2025-02-13T18:49:34.470134485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 18:49:34.471318 kubelet[2477]: I0213 18:49:34.470621 2477 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 18:49:34.745198 kubelet[2477]: E0213 18:49:34.745168 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:35.287172 kubelet[2477]: I0213 18:49:35.287117 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.287100841 podStartE2EDuration="5.287100841s" podCreationTimestamp="2025-02-13 18:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:31.135550832 +0000 UTC m=+1.140153936" watchObservedRunningTime="2025-02-13 18:49:35.287100841 +0000 UTC m=+5.291703944" Feb 13 18:49:35.292287 kubelet[2477]: W0213 18:49:35.292182 2477 reflector.go:561] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'localhost' and this object Feb 13 18:49:35.292287 kubelet[2477]: E0213 18:49:35.292228 2477 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 18:49:35.294035 kubelet[2477]: I0213 18:49:35.294004 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e-xtables-lock\") pod \"kube-proxy-rqqjn\" (UID: \"5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e\") " pod="kube-system/kube-proxy-rqqjn" Feb 13 18:49:35.294345 kubelet[2477]: I0213 18:49:35.294211 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwrc\" (UniqueName: \"kubernetes.io/projected/5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e-kube-api-access-5qwrc\") pod \"kube-proxy-rqqjn\" (UID: \"5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e\") " pod="kube-system/kube-proxy-rqqjn" Feb 13 18:49:35.294345 kubelet[2477]: I0213 18:49:35.294244 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e-lib-modules\") pod \"kube-proxy-rqqjn\" (UID: \"5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e\") " pod="kube-system/kube-proxy-rqqjn" Feb 13 18:49:35.294345 kubelet[2477]: I0213 18:49:35.294286 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjj9\" (UniqueName: \"kubernetes.io/projected/809a11fe-8409-400a-853e-26bd87d059ab-kube-api-access-rxjj9\") pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.294345 kubelet[2477]: I0213 18:49:35.294305 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e-kube-proxy\") pod \"kube-proxy-rqqjn\" (UID: \"5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e\") " pod="kube-system/kube-proxy-rqqjn" Feb 13 18:49:35.294345 kubelet[2477]: I0213 18:49:35.294321 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/809a11fe-8409-400a-853e-26bd87d059ab-run\") 
pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.294613 kubelet[2477]: I0213 18:49:35.294500 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/809a11fe-8409-400a-853e-26bd87d059ab-cni-plugin\") pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.294613 kubelet[2477]: I0213 18:49:35.294527 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/809a11fe-8409-400a-853e-26bd87d059ab-cni\") pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.294613 kubelet[2477]: I0213 18:49:35.294543 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/809a11fe-8409-400a-853e-26bd87d059ab-flannel-cfg\") pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.294613 kubelet[2477]: I0213 18:49:35.294571 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/809a11fe-8409-400a-853e-26bd87d059ab-xtables-lock\") pod \"kube-flannel-ds-lcp98\" (UID: \"809a11fe-8409-400a-853e-26bd87d059ab\") " pod="kube-flannel/kube-flannel-ds-lcp98" Feb 13 18:49:35.299603 systemd[1]: Created slice kubepods-besteffort-pod5ab8e3e0_4f03_4177_9a81_a4a375e9ea7e.slice - libcontainer container kubepods-besteffort-pod5ab8e3e0_4f03_4177_9a81_a4a375e9ea7e.slice. 
Feb 13 18:49:35.314624 systemd[1]: Created slice kubepods-burstable-pod809a11fe_8409_400a_853e_26bd87d059ab.slice - libcontainer container kubepods-burstable-pod809a11fe_8409_400a_853e_26bd87d059ab.slice. Feb 13 18:49:35.612574 kubelet[2477]: E0213 18:49:35.612237 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:35.613017 containerd[1444]: time="2025-02-13T18:49:35.612680057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqqjn,Uid:5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e,Namespace:kube-system,Attempt:0,}" Feb 13 18:49:35.630571 containerd[1444]: time="2025-02-13T18:49:35.630485853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:49:35.630702 containerd[1444]: time="2025-02-13T18:49:35.630542429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:49:35.630702 containerd[1444]: time="2025-02-13T18:49:35.630589482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:35.630702 containerd[1444]: time="2025-02-13T18:49:35.630673825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:35.657135 systemd[1]: Started cri-containerd-870b2d2da065ceaed5e92c307eec0f6c6419d6af230ee350779154441ff0dde5.scope - libcontainer container 870b2d2da065ceaed5e92c307eec0f6c6419d6af230ee350779154441ff0dde5. 
Feb 13 18:49:35.677452 containerd[1444]: time="2025-02-13T18:49:35.677407552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqqjn,Uid:5ab8e3e0-4f03-4177-9a81-a4a375e9ea7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"870b2d2da065ceaed5e92c307eec0f6c6419d6af230ee350779154441ff0dde5\"" Feb 13 18:49:35.678299 kubelet[2477]: E0213 18:49:35.678256 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:35.682052 containerd[1444]: time="2025-02-13T18:49:35.681868234Z" level=info msg="CreateContainer within sandbox \"870b2d2da065ceaed5e92c307eec0f6c6419d6af230ee350779154441ff0dde5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 18:49:35.693457 containerd[1444]: time="2025-02-13T18:49:35.693412167Z" level=info msg="CreateContainer within sandbox \"870b2d2da065ceaed5e92c307eec0f6c6419d6af230ee350779154441ff0dde5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d425fa896a882fb0fef108857880246176db24856b60e621d7cdd840c7e7e98\"" Feb 13 18:49:35.694306 containerd[1444]: time="2025-02-13T18:49:35.694255922Z" level=info msg="StartContainer for \"6d425fa896a882fb0fef108857880246176db24856b60e621d7cdd840c7e7e98\"" Feb 13 18:49:35.720953 systemd[1]: Started cri-containerd-6d425fa896a882fb0fef108857880246176db24856b60e621d7cdd840c7e7e98.scope - libcontainer container 6d425fa896a882fb0fef108857880246176db24856b60e621d7cdd840c7e7e98. 
Feb 13 18:49:35.750027 containerd[1444]: time="2025-02-13T18:49:35.749986273Z" level=info msg="StartContainer for \"6d425fa896a882fb0fef108857880246176db24856b60e621d7cdd840c7e7e98\" returns successfully" Feb 13 18:49:36.101971 kubelet[2477]: E0213 18:49:36.101948 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:36.517674 kubelet[2477]: E0213 18:49:36.517378 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:36.517915 containerd[1444]: time="2025-02-13T18:49:36.517881841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lcp98,Uid:809a11fe-8409-400a-853e-26bd87d059ab,Namespace:kube-flannel,Attempt:0,}" Feb 13 18:49:36.537514 containerd[1444]: time="2025-02-13T18:49:36.537119359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:49:36.537514 containerd[1444]: time="2025-02-13T18:49:36.537484735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:49:36.537514 containerd[1444]: time="2025-02-13T18:49:36.537495818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:36.537669 containerd[1444]: time="2025-02-13T18:49:36.537568997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:36.555943 systemd[1]: Started cri-containerd-b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e.scope - libcontainer container b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e. 
Feb 13 18:49:36.580614 containerd[1444]: time="2025-02-13T18:49:36.580554263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lcp98,Uid:809a11fe-8409-400a-853e-26bd87d059ab,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\"" Feb 13 18:49:36.581342 kubelet[2477]: E0213 18:49:36.581315 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:36.582611 containerd[1444]: time="2025-02-13T18:49:36.582576117Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 18:49:37.890819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483992253.mount: Deactivated successfully. Feb 13 18:49:37.925465 containerd[1444]: time="2025-02-13T18:49:37.925416023Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:37.926566 containerd[1444]: time="2025-02-13T18:49:37.926517419Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 18:49:37.928774 containerd[1444]: time="2025-02-13T18:49:37.927519630Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:37.930796 containerd[1444]: time="2025-02-13T18:49:37.930736275Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.348103545s" Feb 13 
18:49:37.930876 containerd[1444]: time="2025-02-13T18:49:37.930808453Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 18:49:37.931790 containerd[1444]: time="2025-02-13T18:49:37.931741967Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:37.933952 containerd[1444]: time="2025-02-13T18:49:37.933923874Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 18:49:37.944626 containerd[1444]: time="2025-02-13T18:49:37.944568780Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb\"" Feb 13 18:49:37.945269 containerd[1444]: time="2025-02-13T18:49:37.945232546Z" level=info msg="StartContainer for \"c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb\"" Feb 13 18:49:37.971975 systemd[1]: Started cri-containerd-c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb.scope - libcontainer container c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb. Feb 13 18:49:37.992084 containerd[1444]: time="2025-02-13T18:49:37.992020385Z" level=info msg="StartContainer for \"c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb\" returns successfully" Feb 13 18:49:37.998424 systemd[1]: cri-containerd-c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb.scope: Deactivated successfully. 
Feb 13 18:49:38.035236 containerd[1444]: time="2025-02-13T18:49:38.035159115Z" level=info msg="shim disconnected" id=c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb namespace=k8s.io Feb 13 18:49:38.035236 containerd[1444]: time="2025-02-13T18:49:38.035212408Z" level=warning msg="cleaning up after shim disconnected" id=c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb namespace=k8s.io Feb 13 18:49:38.035236 containerd[1444]: time="2025-02-13T18:49:38.035221130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:49:38.107068 kubelet[2477]: E0213 18:49:38.107006 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:38.108097 containerd[1444]: time="2025-02-13T18:49:38.108060493Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 18:49:38.118329 kubelet[2477]: I0213 18:49:38.118113 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rqqjn" podStartSLOduration=3.118095239 podStartE2EDuration="3.118095239s" podCreationTimestamp="2025-02-13 18:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:36.110360558 +0000 UTC m=+6.114963741" watchObservedRunningTime="2025-02-13 18:49:38.118095239 +0000 UTC m=+8.122698342" Feb 13 18:49:38.837890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c83f63938efb10d2304081707bc438f4364f3b3260f09fcc4770fcc1e2c23dfb-rootfs.mount: Deactivated successfully. Feb 13 18:49:39.148767 update_engine[1432]: I20250213 18:49:39.148681 1432 update_attempter.cc:509] Updating boot flags... 
Feb 13 18:49:39.170834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2855) Feb 13 18:49:39.208914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2854) Feb 13 18:49:39.227804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2854) Feb 13 18:49:39.551187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152886545.mount: Deactivated successfully. Feb 13 18:49:40.067887 containerd[1444]: time="2025-02-13T18:49:40.067844292Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:40.068419 containerd[1444]: time="2025-02-13T18:49:40.068372406Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874259" Feb 13 18:49:40.069180 containerd[1444]: time="2025-02-13T18:49:40.069112885Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:40.072056 containerd[1444]: time="2025-02-13T18:49:40.072027711Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:49:40.073255 containerd[1444]: time="2025-02-13T18:49:40.073134629Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.965031206s" Feb 13 18:49:40.073255 containerd[1444]: time="2025-02-13T18:49:40.073164876Z" level=info msg="PullImage 
\"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 18:49:40.075245 containerd[1444]: time="2025-02-13T18:49:40.075214796Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 18:49:40.085492 containerd[1444]: time="2025-02-13T18:49:40.085402265Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0\"" Feb 13 18:49:40.085909 containerd[1444]: time="2025-02-13T18:49:40.085740458Z" level=info msg="StartContainer for \"07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0\"" Feb 13 18:49:40.120022 systemd[1]: Started cri-containerd-07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0.scope - libcontainer container 07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0. Feb 13 18:49:40.149736 systemd[1]: cri-containerd-07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0.scope: Deactivated successfully. Feb 13 18:49:40.170198 containerd[1444]: time="2025-02-13T18:49:40.170119668Z" level=info msg="StartContainer for \"07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0\" returns successfully" Feb 13 18:49:40.183352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0-rootfs.mount: Deactivated successfully. 
Feb 13 18:49:40.202601 kubelet[2477]: I0213 18:49:40.202557 2477 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 18:49:40.235811 systemd[1]: Created slice kubepods-burstable-pod57f44e7f_ac81_4527_bb0d_bb88f961b68b.slice - libcontainer container kubepods-burstable-pod57f44e7f_ac81_4527_bb0d_bb88f961b68b.slice. Feb 13 18:49:40.240035 systemd[1]: Created slice kubepods-burstable-pod7d3e0cc0_b2bd_4bb8_b4ec_fc073a338562.slice - libcontainer container kubepods-burstable-pod7d3e0cc0_b2bd_4bb8_b4ec_fc073a338562.slice. Feb 13 18:49:40.244877 containerd[1444]: time="2025-02-13T18:49:40.244815518Z" level=info msg="shim disconnected" id=07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0 namespace=k8s.io Feb 13 18:49:40.244877 containerd[1444]: time="2025-02-13T18:49:40.244874411Z" level=warning msg="cleaning up after shim disconnected" id=07f02056c6a1aeae1ab3a7575f2a94677c3e3837bf96815ea07a73ac851cfcd0 namespace=k8s.io Feb 13 18:49:40.245050 containerd[1444]: time="2025-02-13T18:49:40.244884933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:49:40.334343 kubelet[2477]: I0213 18:49:40.334114 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gpv\" (UniqueName: \"kubernetes.io/projected/57f44e7f-ac81-4527-bb0d-bb88f961b68b-kube-api-access-k6gpv\") pod \"coredns-6f6b679f8f-4zz9m\" (UID: \"57f44e7f-ac81-4527-bb0d-bb88f961b68b\") " pod="kube-system/coredns-6f6b679f8f-4zz9m" Feb 13 18:49:40.334343 kubelet[2477]: I0213 18:49:40.334197 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562-config-volume\") pod \"coredns-6f6b679f8f-68w7s\" (UID: \"7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562\") " pod="kube-system/coredns-6f6b679f8f-68w7s" Feb 13 18:49:40.334343 kubelet[2477]: I0213 18:49:40.334221 2477 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57f44e7f-ac81-4527-bb0d-bb88f961b68b-config-volume\") pod \"coredns-6f6b679f8f-4zz9m\" (UID: \"57f44e7f-ac81-4527-bb0d-bb88f961b68b\") " pod="kube-system/coredns-6f6b679f8f-4zz9m" Feb 13 18:49:40.334343 kubelet[2477]: I0213 18:49:40.334276 2477 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8fs\" (UniqueName: \"kubernetes.io/projected/7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562-kube-api-access-vt8fs\") pod \"coredns-6f6b679f8f-68w7s\" (UID: \"7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562\") " pod="kube-system/coredns-6f6b679f8f-68w7s" Feb 13 18:49:40.538540 kubelet[2477]: E0213 18:49:40.538494 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:40.539248 containerd[1444]: time="2025-02-13T18:49:40.538940477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zz9m,Uid:57f44e7f-ac81-4527-bb0d-bb88f961b68b,Namespace:kube-system,Attempt:0,}" Feb 13 18:49:40.545164 kubelet[2477]: E0213 18:49:40.544923 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:40.545840 containerd[1444]: time="2025-02-13T18:49:40.545342132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68w7s,Uid:7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562,Namespace:kube-system,Attempt:0,}" Feb 13 18:49:40.618323 containerd[1444]: time="2025-02-13T18:49:40.618278204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68w7s,Uid:7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"cbd1e424bd4eea218db65b9208051ef1381f24fc9e9f0fa50bb54a4775341e91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 18:49:40.618573 kubelet[2477]: E0213 18:49:40.618535 2477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd1e424bd4eea218db65b9208051ef1381f24fc9e9f0fa50bb54a4775341e91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 18:49:40.618621 kubelet[2477]: E0213 18:49:40.618609 2477 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd1e424bd4eea218db65b9208051ef1381f24fc9e9f0fa50bb54a4775341e91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-68w7s" Feb 13 18:49:40.619041 containerd[1444]: time="2025-02-13T18:49:40.619004720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zz9m,Uid:57f44e7f-ac81-4527-bb0d-bb88f961b68b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afe5de1161cf444a927dfecfb18907ae9af0138b9c15121e572d4ce1330e6ddd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 18:49:40.619193 kubelet[2477]: E0213 18:49:40.619165 2477 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe5de1161cf444a927dfecfb18907ae9af0138b9c15121e572d4ce1330e6ddd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 18:49:40.619235 kubelet[2477]: E0213 18:49:40.619209 2477 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe5de1161cf444a927dfecfb18907ae9af0138b9c15121e572d4ce1330e6ddd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4zz9m" Feb 13 18:49:40.621329 kubelet[2477]: E0213 18:49:40.621293 2477 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd1e424bd4eea218db65b9208051ef1381f24fc9e9f0fa50bb54a4775341e91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-68w7s" Feb 13 18:49:40.621626 kubelet[2477]: E0213 18:49:40.621363 2477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-68w7s_kube-system(7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-68w7s_kube-system(7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbd1e424bd4eea218db65b9208051ef1381f24fc9e9f0fa50bb54a4775341e91\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-68w7s" podUID="7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562" Feb 13 18:49:40.621734 kubelet[2477]: E0213 18:49:40.621662 2477 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe5de1161cf444a927dfecfb18907ae9af0138b9c15121e572d4ce1330e6ddd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4zz9m" Feb 13 18:49:40.621765 kubelet[2477]: 
E0213 18:49:40.621729 2477 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4zz9m_kube-system(57f44e7f-ac81-4527-bb0d-bb88f961b68b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4zz9m_kube-system(57f44e7f-ac81-4527-bb0d-bb88f961b68b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe5de1161cf444a927dfecfb18907ae9af0138b9c15121e572d4ce1330e6ddd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-4zz9m" podUID="57f44e7f-ac81-4527-bb0d-bb88f961b68b" Feb 13 18:49:41.119123 kubelet[2477]: E0213 18:49:41.119072 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:41.128118 containerd[1444]: time="2025-02-13T18:49:41.128074831Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 18:49:41.137689 containerd[1444]: time="2025-02-13T18:49:41.137641627Z" level=info msg="CreateContainer within sandbox \"b509e044bd7bba8766af99fdac4e5025234efef64076a17e8e643cddb374d57e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621\"" Feb 13 18:49:41.138527 containerd[1444]: time="2025-02-13T18:49:41.138083957Z" level=info msg="StartContainer for \"e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621\"" Feb 13 18:49:41.164932 systemd[1]: Started cri-containerd-e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621.scope - libcontainer container e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621. 
Feb 13 18:49:41.186588 containerd[1444]: time="2025-02-13T18:49:41.186550306Z" level=info msg="StartContainer for \"e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621\" returns successfully" Feb 13 18:49:42.083397 systemd[1]: run-containerd-runc-k8s.io-e0968b9bad6d20ed761d670ab8a3599e054dea35b442af7f1d2c0561e2278621-runc.1u36FC.mount: Deactivated successfully. Feb 13 18:49:42.123332 kubelet[2477]: E0213 18:49:42.123301 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:42.135142 kubelet[2477]: I0213 18:49:42.134997 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lcp98" podStartSLOduration=3.642432248 podStartE2EDuration="7.134386722s" podCreationTimestamp="2025-02-13 18:49:35 +0000 UTC" firstStartedPulling="2025-02-13 18:49:36.582121517 +0000 UTC m=+6.586724620" lastFinishedPulling="2025-02-13 18:49:40.074075991 +0000 UTC m=+10.078679094" observedRunningTime="2025-02-13 18:49:42.133999167 +0000 UTC m=+12.138602270" watchObservedRunningTime="2025-02-13 18:49:42.134386722 +0000 UTC m=+12.138989785" Feb 13 18:49:42.312276 systemd-networkd[1382]: flannel.1: Link UP Feb 13 18:49:42.312283 systemd-networkd[1382]: flannel.1: Gained carrier Feb 13 18:49:43.124475 kubelet[2477]: E0213 18:49:43.124432 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:43.482570 kubelet[2477]: E0213 18:49:43.482318 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:44.265972 systemd-networkd[1382]: flannel.1: Gained IPv6LL Feb 13 18:49:44.282394 kubelet[2477]: E0213 18:49:44.280888 2477 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:44.753488 kubelet[2477]: E0213 18:49:44.753459 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:45.127283 kubelet[2477]: E0213 18:49:45.127199 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:52.073675 kubelet[2477]: E0213 18:49:52.073215 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:52.074076 containerd[1444]: time="2025-02-13T18:49:52.073942796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zz9m,Uid:57f44e7f-ac81-4527-bb0d-bb88f961b68b,Namespace:kube-system,Attempt:0,}" Feb 13 18:49:52.112877 systemd-networkd[1382]: cni0: Link UP Feb 13 18:49:52.112883 systemd-networkd[1382]: cni0: Gained carrier Feb 13 18:49:52.115300 systemd-networkd[1382]: cni0: Lost carrier Feb 13 18:49:52.118394 systemd-networkd[1382]: vethc1c2596a: Link UP Feb 13 18:49:52.120066 kernel: cni0: port 1(vethc1c2596a) entered blocking state Feb 13 18:49:52.120133 kernel: cni0: port 1(vethc1c2596a) entered disabled state Feb 13 18:49:52.120154 kernel: vethc1c2596a: entered allmulticast mode Feb 13 18:49:52.120170 kernel: vethc1c2596a: entered promiscuous mode Feb 13 18:49:52.121364 kernel: cni0: port 1(vethc1c2596a) entered blocking state Feb 13 18:49:52.121418 kernel: cni0: port 1(vethc1c2596a) entered forwarding state Feb 13 18:49:52.123809 kernel: cni0: port 1(vethc1c2596a) entered disabled state Feb 13 18:49:52.134637 systemd-networkd[1382]: vethc1c2596a: Gained carrier Feb 13 
18:49:52.134822 kernel: cni0: port 1(vethc1c2596a) entered blocking state Feb 13 18:49:52.134860 kernel: cni0: port 1(vethc1c2596a) entered forwarding state Feb 13 18:49:52.135228 systemd-networkd[1382]: cni0: Gained carrier Feb 13 18:49:52.137135 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Feb 13 18:49:52.137135 containerd[1444]: delegateAdd: netconf sent to delegate plugin: Feb 13 18:49:52.152142 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T18:49:52.152059364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:49:52.152142 containerd[1444]: time="2025-02-13T18:49:52.152111331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:49:52.152142 containerd[1444]: time="2025-02-13T18:49:52.152123292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:52.152304 containerd[1444]: time="2025-02-13T18:49:52.152195101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:52.170942 systemd[1]: Started cri-containerd-0f88411e8cac87e8230844a6ee115290d681fc85f42f03e32b5fe4cf86b7a3db.scope - libcontainer container 0f88411e8cac87e8230844a6ee115290d681fc85f42f03e32b5fe4cf86b7a3db. Feb 13 18:49:52.180237 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 18:49:52.196525 containerd[1444]: time="2025-02-13T18:49:52.196494120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zz9m,Uid:57f44e7f-ac81-4527-bb0d-bb88f961b68b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f88411e8cac87e8230844a6ee115290d681fc85f42f03e32b5fe4cf86b7a3db\"" Feb 13 18:49:52.197380 kubelet[2477]: E0213 18:49:52.197199 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:52.199293 containerd[1444]: time="2025-02-13T18:49:52.199258546Z" level=info msg="CreateContainer within sandbox \"0f88411e8cac87e8230844a6ee115290d681fc85f42f03e32b5fe4cf86b7a3db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 18:49:52.230865 containerd[1444]: time="2025-02-13T18:49:52.230814532Z" level=info msg="CreateContainer within sandbox \"0f88411e8cac87e8230844a6ee115290d681fc85f42f03e32b5fe4cf86b7a3db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1dcd2cde8f19e39fa1dcd9361e2e2e7e4d8cc18fb565cd1c35ed72ac76a61823\"" Feb 13 18:49:52.231512 containerd[1444]: time="2025-02-13T18:49:52.231418407Z" level=info msg="StartContainer for \"1dcd2cde8f19e39fa1dcd9361e2e2e7e4d8cc18fb565cd1c35ed72ac76a61823\"" Feb 13 18:49:52.262970 systemd[1]: Started cri-containerd-1dcd2cde8f19e39fa1dcd9361e2e2e7e4d8cc18fb565cd1c35ed72ac76a61823.scope - libcontainer container 1dcd2cde8f19e39fa1dcd9361e2e2e7e4d8cc18fb565cd1c35ed72ac76a61823. 
Feb 13 18:49:52.285073 containerd[1444]: time="2025-02-13T18:49:52.285029831Z" level=info msg="StartContainer for \"1dcd2cde8f19e39fa1dcd9361e2e2e7e4d8cc18fb565cd1c35ed72ac76a61823\" returns successfully" Feb 13 18:49:53.140726 kubelet[2477]: E0213 18:49:53.140676 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:53.152231 kubelet[2477]: I0213 18:49:53.152154 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4zz9m" podStartSLOduration=18.152138569 podStartE2EDuration="18.152138569s" podCreationTimestamp="2025-02-13 18:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:53.150859015 +0000 UTC m=+23.155462118" watchObservedRunningTime="2025-02-13 18:49:53.152138569 +0000 UTC m=+23.156741672" Feb 13 18:49:53.737930 systemd-networkd[1382]: cni0: Gained IPv6LL Feb 13 18:49:54.121929 systemd-networkd[1382]: vethc1c2596a: Gained IPv6LL Feb 13 18:49:54.142605 kubelet[2477]: E0213 18:49:54.142564 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:54.662359 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:37370.service - OpenSSH per-connection server daemon (10.0.0.1:37370). Feb 13 18:49:54.702614 sshd[3296]: Accepted publickey for core from 10.0.0.1 port 37370 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 18:49:54.703888 sshd-session[3296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:49:54.707911 systemd-logind[1426]: New session 6 of user core. Feb 13 18:49:54.715943 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 18:49:54.834805 sshd[3298]: Connection closed by 10.0.0.1 port 37370 Feb 13 18:49:54.835117 sshd-session[3296]: pam_unix(sshd:session): session closed for user core Feb 13 18:49:54.838293 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:37370.service: Deactivated successfully. Feb 13 18:49:54.840606 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 18:49:54.841461 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Feb 13 18:49:54.842311 systemd-logind[1426]: Removed session 6. Feb 13 18:49:55.073092 kubelet[2477]: E0213 18:49:55.072686 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:55.073646 containerd[1444]: time="2025-02-13T18:49:55.073611420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68w7s,Uid:7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562,Namespace:kube-system,Attempt:0,}" Feb 13 18:49:55.093895 systemd-networkd[1382]: veth35bf29b7: Link UP Feb 13 18:49:55.095979 kernel: cni0: port 2(veth35bf29b7) entered blocking state Feb 13 18:49:55.096168 kernel: cni0: port 2(veth35bf29b7) entered disabled state Feb 13 18:49:55.096189 kernel: veth35bf29b7: entered allmulticast mode Feb 13 18:49:55.098591 kernel: veth35bf29b7: entered promiscuous mode Feb 13 18:49:55.103020 kernel: cni0: port 2(veth35bf29b7) entered blocking state Feb 13 18:49:55.103090 kernel: cni0: port 2(veth35bf29b7) entered forwarding state Feb 13 18:49:55.103098 systemd-networkd[1382]: veth35bf29b7: Gained carrier Feb 13 18:49:55.104277 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Feb 13 18:49:55.104277 containerd[1444]: delegateAdd: netconf sent to delegate plugin: Feb 13 18:49:55.119888 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T18:49:55.119535022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:49:55.120084 containerd[1444]: time="2025-02-13T18:49:55.119909824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:49:55.120084 containerd[1444]: time="2025-02-13T18:49:55.119925305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:55.120084 containerd[1444]: time="2025-02-13T18:49:55.120001634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:49:55.144501 kubelet[2477]: E0213 18:49:55.144467 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:55.144953 systemd[1]: Started cri-containerd-da16c884aadf207173eada5284a1d9f4f7513d77fc1e7ed25aabfd705d0a8262.scope - libcontainer container da16c884aadf207173eada5284a1d9f4f7513d77fc1e7ed25aabfd705d0a8262. 
Feb 13 18:49:55.159087 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 18:49:55.178083 containerd[1444]: time="2025-02-13T18:49:55.178046587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68w7s,Uid:7d3e0cc0-b2bd-4bb8-b4ec-fc073a338562,Namespace:kube-system,Attempt:0,} returns sandbox id \"da16c884aadf207173eada5284a1d9f4f7513d77fc1e7ed25aabfd705d0a8262\"" Feb 13 18:49:55.178854 kubelet[2477]: E0213 18:49:55.178833 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:55.181349 containerd[1444]: time="2025-02-13T18:49:55.181311791Z" level=info msg="CreateContainer within sandbox \"da16c884aadf207173eada5284a1d9f4f7513d77fc1e7ed25aabfd705d0a8262\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 18:49:55.198104 containerd[1444]: time="2025-02-13T18:49:55.198008133Z" level=info msg="CreateContainer within sandbox \"da16c884aadf207173eada5284a1d9f4f7513d77fc1e7ed25aabfd705d0a8262\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58e5651fb36ebbb9238e74885379164515f7d5fb299df7e1400f2064e91d027d\"" Feb 13 18:49:55.198481 containerd[1444]: time="2025-02-13T18:49:55.198454543Z" level=info msg="StartContainer for \"58e5651fb36ebbb9238e74885379164515f7d5fb299df7e1400f2064e91d027d\"" Feb 13 18:49:55.222921 systemd[1]: Started cri-containerd-58e5651fb36ebbb9238e74885379164515f7d5fb299df7e1400f2064e91d027d.scope - libcontainer container 58e5651fb36ebbb9238e74885379164515f7d5fb299df7e1400f2064e91d027d. 
Feb 13 18:49:55.243318 containerd[1444]: time="2025-02-13T18:49:55.243248458Z" level=info msg="StartContainer for \"58e5651fb36ebbb9238e74885379164515f7d5fb299df7e1400f2064e91d027d\" returns successfully" Feb 13 18:49:56.086050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466111230.mount: Deactivated successfully. Feb 13 18:49:56.147363 kubelet[2477]: E0213 18:49:56.147284 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:56.158700 kubelet[2477]: I0213 18:49:56.158639 2477 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-68w7s" podStartSLOduration=21.15862372 podStartE2EDuration="21.15862372s" podCreationTimestamp="2025-02-13 18:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:49:56.158577155 +0000 UTC m=+26.163180338" watchObservedRunningTime="2025-02-13 18:49:56.15862372 +0000 UTC m=+26.163226783" Feb 13 18:49:56.426003 systemd-networkd[1382]: veth35bf29b7: Gained IPv6LL Feb 13 18:49:57.148642 kubelet[2477]: E0213 18:49:57.148616 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:58.149928 kubelet[2477]: E0213 18:49:58.149886 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 18:49:59.848241 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:37384.service - OpenSSH per-connection server daemon (10.0.0.1:37384). 
Feb 13 18:49:59.888464 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 37384 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:49:59.889703 sshd-session[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:49:59.893011 systemd-logind[1426]: New session 7 of user core.
Feb 13 18:49:59.899943 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 18:50:00.012673 sshd[3449]: Connection closed by 10.0.0.1 port 37384
Feb 13 18:50:00.013195 sshd-session[3447]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:00.016234 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:37384.service: Deactivated successfully.
Feb 13 18:50:00.017882 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 18:50:00.018369 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit.
Feb 13 18:50:00.019796 systemd-logind[1426]: Removed session 7.
Feb 13 18:50:05.026401 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:37142.service - OpenSSH per-connection server daemon (10.0.0.1:37142).
Feb 13 18:50:05.066127 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 37142 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:05.067339 sshd-session[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:05.071400 systemd-logind[1426]: New session 8 of user core.
Feb 13 18:50:05.081927 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 18:50:05.190647 sshd[3489]: Connection closed by 10.0.0.1 port 37142
Feb 13 18:50:05.191236 sshd-session[3487]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:05.202278 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:37142.service: Deactivated successfully.
Feb 13 18:50:05.203864 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 18:50:05.205111 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit.
Feb 13 18:50:05.206266 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:37144.service - OpenSSH per-connection server daemon (10.0.0.1:37144).
Feb 13 18:50:05.207175 systemd-logind[1426]: Removed session 8.
Feb 13 18:50:05.245549 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 37144 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:05.246670 sshd-session[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:05.250486 systemd-logind[1426]: New session 9 of user core.
Feb 13 18:50:05.260937 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 18:50:05.407635 sshd[3504]: Connection closed by 10.0.0.1 port 37144
Feb 13 18:50:05.408048 sshd-session[3502]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:05.420227 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:37144.service: Deactivated successfully.
Feb 13 18:50:05.425096 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 18:50:05.429104 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit.
Feb 13 18:50:05.442199 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:37158.service - OpenSSH per-connection server daemon (10.0.0.1:37158).
Feb 13 18:50:05.443182 systemd-logind[1426]: Removed session 9.
Feb 13 18:50:05.478764 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 37158 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:05.480021 sshd-session[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:05.483844 systemd-logind[1426]: New session 10 of user core.
Feb 13 18:50:05.492931 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 18:50:05.606951 sshd[3517]: Connection closed by 10.0.0.1 port 37158
Feb 13 18:50:05.607317 sshd-session[3515]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:05.610060 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit.
Feb 13 18:50:05.610325 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:37158.service: Deactivated successfully.
Feb 13 18:50:05.612626 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 18:50:05.614470 systemd-logind[1426]: Removed session 10.
Feb 13 18:50:10.621376 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:37172.service - OpenSSH per-connection server daemon (10.0.0.1:37172).
Feb 13 18:50:10.661240 sshd[3552]: Accepted publickey for core from 10.0.0.1 port 37172 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:10.662436 sshd-session[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:10.666273 systemd-logind[1426]: New session 11 of user core.
Feb 13 18:50:10.679935 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 18:50:10.788936 sshd[3554]: Connection closed by 10.0.0.1 port 37172
Feb 13 18:50:10.789292 sshd-session[3552]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:10.802182 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:37172.service: Deactivated successfully.
Feb 13 18:50:10.803582 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 18:50:10.804816 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit.
Feb 13 18:50:10.806537 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:37178.service - OpenSSH per-connection server daemon (10.0.0.1:37178).
Feb 13 18:50:10.807335 systemd-logind[1426]: Removed session 11.
Feb 13 18:50:10.846471 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 37178 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:10.847706 sshd-session[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:10.851236 systemd-logind[1426]: New session 12 of user core.
Feb 13 18:50:10.857909 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 18:50:11.051885 sshd[3569]: Connection closed by 10.0.0.1 port 37178
Feb 13 18:50:11.054421 sshd-session[3567]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:11.063025 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:37178.service: Deactivated successfully.
Feb 13 18:50:11.064330 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 18:50:11.066444 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit.
Feb 13 18:50:11.067942 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182).
Feb 13 18:50:11.068827 systemd-logind[1426]: Removed session 12.
Feb 13 18:50:11.111457 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:11.112908 sshd-session[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:11.118068 systemd-logind[1426]: New session 13 of user core.
Feb 13 18:50:11.126149 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 18:50:12.336613 sshd[3581]: Connection closed by 10.0.0.1 port 37182
Feb 13 18:50:12.337319 sshd-session[3579]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:12.347280 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:37182.service: Deactivated successfully.
Feb 13 18:50:12.348690 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 18:50:12.351139 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit.
Feb 13 18:50:12.356083 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:37186.service - OpenSSH per-connection server daemon (10.0.0.1:37186).
Feb 13 18:50:12.357441 systemd-logind[1426]: Removed session 13.
Feb 13 18:50:12.414158 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 37186 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:12.415481 sshd-session[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:12.420582 systemd-logind[1426]: New session 14 of user core.
Feb 13 18:50:12.437979 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 18:50:12.658479 sshd[3611]: Connection closed by 10.0.0.1 port 37186
Feb 13 18:50:12.661591 sshd-session[3603]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:12.668966 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:37186.service: Deactivated successfully.
Feb 13 18:50:12.670474 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 18:50:12.672693 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit.
Feb 13 18:50:12.678048 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:60718.service - OpenSSH per-connection server daemon (10.0.0.1:60718).
Feb 13 18:50:12.679003 systemd-logind[1426]: Removed session 14.
Feb 13 18:50:12.714869 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 60718 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:12.716190 sshd-session[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:12.719822 systemd-logind[1426]: New session 15 of user core.
Feb 13 18:50:12.730958 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 18:50:12.838273 sshd[3639]: Connection closed by 10.0.0.1 port 60718
Feb 13 18:50:12.838688 sshd-session[3637]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:12.841798 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:60718.service: Deactivated successfully.
Feb 13 18:50:12.843478 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 18:50:12.844203 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit.
Feb 13 18:50:12.845132 systemd-logind[1426]: Removed session 15.
Feb 13 18:50:17.851493 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730).
Feb 13 18:50:17.898479 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:17.899649 sshd-session[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:17.903392 systemd-logind[1426]: New session 16 of user core.
Feb 13 18:50:17.918191 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 18:50:18.024402 sshd[3677]: Connection closed by 10.0.0.1 port 60730
Feb 13 18:50:18.024742 sshd-session[3675]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:18.027840 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:60730.service: Deactivated successfully.
Feb 13 18:50:18.029482 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 18:50:18.030166 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit.
Feb 13 18:50:18.031126 systemd-logind[1426]: Removed session 16.
Feb 13 18:50:23.036236 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:46528.service - OpenSSH per-connection server daemon (10.0.0.1:46528).
Feb 13 18:50:23.075432 sshd[3711]: Accepted publickey for core from 10.0.0.1 port 46528 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:23.076482 sshd-session[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:23.079841 systemd-logind[1426]: New session 17 of user core.
Feb 13 18:50:23.085941 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 18:50:23.193123 sshd[3713]: Connection closed by 10.0.0.1 port 46528
Feb 13 18:50:23.194054 sshd-session[3711]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:23.197019 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:46528.service: Deactivated successfully.
Feb 13 18:50:23.198935 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 18:50:23.199679 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit.
Feb 13 18:50:23.200900 systemd-logind[1426]: Removed session 17.
Feb 13 18:50:28.205109 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:46532.service - OpenSSH per-connection server daemon (10.0.0.1:46532).
Feb 13 18:50:28.249972 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 46532 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 18:50:28.251228 sshd-session[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:50:28.255884 systemd-logind[1426]: New session 18 of user core.
Feb 13 18:50:28.267979 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 18:50:28.379027 sshd[3749]: Connection closed by 10.0.0.1 port 46532
Feb 13 18:50:28.379384 sshd-session[3747]: pam_unix(sshd:session): session closed for user core
Feb 13 18:50:28.381775 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:46532.service: Deactivated successfully.
Feb 13 18:50:28.384040 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 18:50:28.385973 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit.
Feb 13 18:50:28.387288 systemd-logind[1426]: Removed session 18.