Feb 13 15:18:07.921276 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:18:07.921296 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:18:07.921306 kernel: KASLR enabled
Feb 13 15:18:07.921311 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:18:07.921317 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 15:18:07.921322 kernel: random: crng init done
Feb 13 15:18:07.921329 kernel: secureboot: Secure boot disabled
Feb 13 15:18:07.921335 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:18:07.921341 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:18:07.921348 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:18:07.921354 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921359 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921365 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921371 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921378 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921386 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921392 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921398 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921404 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:18:07.921410 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:18:07.921416 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:18:07.921423 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:18:07.921429 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:18:07.921435 kernel: Zone ranges:
Feb 13 15:18:07.921441 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:18:07.921448 kernel: DMA32 empty
Feb 13 15:18:07.921454 kernel: Normal empty
Feb 13 15:18:07.921460 kernel: Movable zone start for each node
Feb 13 15:18:07.921466 kernel: Early memory node ranges
Feb 13 15:18:07.921472 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:18:07.921478 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:18:07.921484 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:18:07.921490 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:18:07.921497 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:18:07.921503 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:18:07.921509 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:18:07.921515 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:18:07.921522 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:18:07.921528 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:18:07.921534 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:18:07.921543 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:18:07.921549 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:18:07.921556 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:18:07.921563 kernel: psci: Trusted OS migration not required
Feb 13 15:18:07.921570 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:18:07.921577 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:18:07.921583 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:18:07.921590 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:18:07.921596 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:18:07.921603 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:18:07.921609 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:18:07.921616 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:18:07.921622 kernel: CPU features: detected: Spectre-v4
Feb 13 15:18:07.921630 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:18:07.921636 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:18:07.921643 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:18:07.921649 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:18:07.921656 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:18:07.921662 kernel: alternatives: applying boot alternatives
Feb 13 15:18:07.921670 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:18:07.921677 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:18:07.921684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:18:07.921691 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:18:07.921697 kernel: Fallback order for Node 0: 0
Feb 13 15:18:07.921705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:18:07.921712 kernel: Policy zone: DMA
Feb 13 15:18:07.921718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:18:07.921725 kernel: software IO TLB: area num 4.
Feb 13 15:18:07.921732 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:18:07.921739 kernel: Memory: 2385936K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186352K reserved, 0K cma-reserved)
Feb 13 15:18:07.921745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:18:07.921752 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:18:07.921759 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:18:07.921765 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:18:07.921772 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:18:07.921779 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:18:07.921787 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:18:07.921793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:18:07.921800 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:18:07.921807 kernel: GICv3: 256 SPIs implemented
Feb 13 15:18:07.921813 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:18:07.921820 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:18:07.921826 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:18:07.921833 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:18:07.921840 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:18:07.921846 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:18:07.921853 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:18:07.921861 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:18:07.921868 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:18:07.921874 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:18:07.921881 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:07.921887 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:18:07.921894 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:18:07.921901 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:18:07.921907 kernel: arm-pv: using stolen time PV
Feb 13 15:18:07.921914 kernel: Console: colour dummy device 80x25
Feb 13 15:18:07.921921 kernel: ACPI: Core revision 20230628
Feb 13 15:18:07.921928 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:18:07.922013 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:18:07.922023 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:18:07.922030 kernel: landlock: Up and running.
Feb 13 15:18:07.922037 kernel: SELinux: Initializing.
Feb 13 15:18:07.922043 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:18:07.922050 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:18:07.922057 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:18:07.922064 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:18:07.922071 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:18:07.922081 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:18:07.922087 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:18:07.922094 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:18:07.922101 kernel: Remapping and enabling EFI services.
Feb 13 15:18:07.922108 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:18:07.922115 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:18:07.922122 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:18:07.922129 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:18:07.922135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:07.922143 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:18:07.922150 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:18:07.922162 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:18:07.922170 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:18:07.922178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:07.922185 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:18:07.922192 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:18:07.922199 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:18:07.922206 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:18:07.922215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:18:07.922222 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:18:07.922229 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:18:07.922236 kernel: SMP: Total of 4 processors activated.
Feb 13 15:18:07.922243 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:18:07.922250 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:18:07.922257 kernel: CPU features: detected: Common not Private translations
Feb 13 15:18:07.922264 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:18:07.922272 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:18:07.922279 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:18:07.922286 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:18:07.922293 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:18:07.922300 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:18:07.922308 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:18:07.922315 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:18:07.922322 kernel: alternatives: applying system-wide alternatives
Feb 13 15:18:07.922329 kernel: devtmpfs: initialized
Feb 13 15:18:07.922337 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:18:07.922346 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:18:07.922352 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:18:07.922359 kernel: SMBIOS 3.0.0 present.
Feb 13 15:18:07.922366 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:18:07.922373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:18:07.922381 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:18:07.922388 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:18:07.922395 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:18:07.922402 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:18:07.922410 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Feb 13 15:18:07.922417 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:18:07.922424 kernel: cpuidle: using governor menu
Feb 13 15:18:07.922431 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:18:07.922438 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:18:07.922445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:18:07.922452 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:18:07.922459 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:18:07.922466 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:18:07.922474 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:18:07.922482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:18:07.922489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:18:07.922496 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:18:07.922503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:18:07.922510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:18:07.922517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:18:07.922524 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:18:07.922532 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:18:07.922539 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:18:07.922546 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:18:07.922553 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:18:07.922561 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:18:07.922568 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:18:07.922575 kernel: ACPI: Interpreter enabled
Feb 13 15:18:07.922582 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:18:07.922603 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:18:07.922611 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:18:07.922619 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:18:07.922628 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:18:07.922764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:18:07.922836 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:18:07.922899 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:18:07.922983 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:18:07.923047 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:18:07.923060 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:18:07.923067 kernel: PCI host bridge to bus 0000:00
Feb 13 15:18:07.923135 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:18:07.923192 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:18:07.923249 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:18:07.923304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:18:07.923386 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:18:07.923464 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:18:07.923529 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:18:07.923593 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:18:07.923661 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:18:07.923736 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:18:07.923802 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:18:07.923866 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:18:07.923926 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:18:07.924020 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:18:07.924078 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:18:07.924088 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:18:07.924095 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:18:07.924103 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:18:07.924110 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:18:07.924120 kernel: iommu: Default domain type: Translated
Feb 13 15:18:07.924127 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:18:07.924134 kernel: efivars: Registered efivars operations
Feb 13 15:18:07.924141 kernel: vgaarb: loaded
Feb 13 15:18:07.924148 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:18:07.924155 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:18:07.924162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:18:07.924169 kernel: pnp: PnP ACPI init
Feb 13 15:18:07.924240 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:18:07.924253 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:18:07.924260 kernel: NET: Registered PF_INET protocol family
Feb 13 15:18:07.924267 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:18:07.924274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:18:07.924281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:18:07.924288 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:18:07.924296 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:18:07.924303 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:18:07.924312 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:18:07.924319 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:18:07.924326 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:18:07.924333 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:18:07.924340 kernel: kvm [1]: HYP mode not available
Feb 13 15:18:07.924347 kernel: Initialise system trusted keyrings
Feb 13 15:18:07.924354 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:18:07.924361 kernel: Key type asymmetric registered
Feb 13 15:18:07.924368 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:18:07.924375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:18:07.924384 kernel: io scheduler mq-deadline registered
Feb 13 15:18:07.924391 kernel: io scheduler kyber registered
Feb 13 15:18:07.924398 kernel: io scheduler bfq registered
Feb 13 15:18:07.924405 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:18:07.924412 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:18:07.924420 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:18:07.924487 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:18:07.924501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:18:07.924508 kernel: thunder_xcv, ver 1.0
Feb 13 15:18:07.924518 kernel: thunder_bgx, ver 1.0
Feb 13 15:18:07.924525 kernel: nicpf, ver 1.0
Feb 13 15:18:07.924534 kernel: nicvf, ver 1.0
Feb 13 15:18:07.924622 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:18:07.924692 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:18:07 UTC (1739459887)
Feb 13 15:18:07.924709 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:18:07.924720 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:18:07.924732 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:18:07.924741 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:18:07.924748 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:18:07.924756 kernel: Segment Routing with IPv6
Feb 13 15:18:07.924762 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:18:07.924770 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:18:07.924777 kernel: Key type dns_resolver registered
Feb 13 15:18:07.924783 kernel: registered taskstats version 1
Feb 13 15:18:07.924790 kernel: Loading compiled-in X.509 certificates
Feb 13 15:18:07.924798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:18:07.924806 kernel: Key type .fscrypt registered
Feb 13 15:18:07.924813 kernel: Key type fscrypt-provisioning registered
Feb 13 15:18:07.924820 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:18:07.924827 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:18:07.924834 kernel: ima: No architecture policies found
Feb 13 15:18:07.924841 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:18:07.924848 kernel: clk: Disabling unused clocks
Feb 13 15:18:07.924855 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:18:07.924862 kernel: Run /init as init process
Feb 13 15:18:07.924870 kernel: with arguments:
Feb 13 15:18:07.924877 kernel: /init
Feb 13 15:18:07.924884 kernel: with environment:
Feb 13 15:18:07.924891 kernel: HOME=/
Feb 13 15:18:07.924898 kernel: TERM=linux
Feb 13 15:18:07.924905 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:18:07.924913 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:18:07.924922 systemd[1]: Detected virtualization kvm.
Feb 13 15:18:07.924937 systemd[1]: Detected architecture arm64.
Feb 13 15:18:07.924959 systemd[1]: Running in initrd.
Feb 13 15:18:07.924967 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:18:07.924974 systemd[1]: Hostname set to .
Feb 13 15:18:07.924982 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:18:07.924989 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:18:07.924997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:18:07.925005 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:18:07.925015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:18:07.925023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:18:07.925031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:18:07.925038 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:18:07.925047 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:18:07.925055 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:18:07.925064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:18:07.925074 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:18:07.925089 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:18:07.925097 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:18:07.925104 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:18:07.925112 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:18:07.925119 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:18:07.925127 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:18:07.925135 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:18:07.925144 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:18:07.925151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:18:07.925159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:18:07.925167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:18:07.925174 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:18:07.925182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:18:07.925189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:18:07.925197 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:18:07.925206 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:18:07.925213 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:18:07.925221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:18:07.925229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:18:07.925236 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:18:07.925244 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:18:07.925251 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:18:07.925261 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:18:07.925269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:07.925277 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:18:07.925302 systemd-journald[240]: Collecting audit messages is disabled.
Feb 13 15:18:07.925323 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:18:07.925330 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:18:07.925339 systemd-journald[240]: Journal started
Feb 13 15:18:07.925362 systemd-journald[240]: Runtime Journal (/run/log/journal/6e9f05fc908a4c858f6cb719b780f2f8) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:18:07.930072 kernel: Bridge firewalling registered
Feb 13 15:18:07.908891 systemd-modules-load[241]: Inserted module 'overlay'
Feb 13 15:18:07.928049 systemd-modules-load[241]: Inserted module 'br_netfilter'
Feb 13 15:18:07.933725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:18:07.933745 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:18:07.935089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:18:07.938574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:07.940992 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:18:07.941989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:18:07.950497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:18:07.951653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:07.953877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:18:07.967161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:18:07.969263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:18:07.979272 dracut-cmdline[277]: dracut-dracut-053
Feb 13 15:18:07.981701 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:18:07.994527 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 15:18:07.994547 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:18:07.994578 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:18:07.999118 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 15:18:08.000215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:18:08.002918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:18:08.061977 kernel: SCSI subsystem initialized
Feb 13 15:18:08.066956 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:18:08.074978 kernel: iscsi: registered transport (tcp)
Feb 13 15:18:08.091965 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:18:08.091981 kernel: QLogic iSCSI HBA Driver
Feb 13 15:18:08.139061 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:18:08.154121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:18:08.170802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:18:08.170850 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:18:08.172044 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:18:08.223009 kernel: raid6: neonx8 gen() 13196 MB/s
Feb 13 15:18:08.239986 kernel: raid6: neonx4 gen() 13400 MB/s
Feb 13 15:18:08.256963 kernel: raid6: neonx2 gen() 13167 MB/s
Feb 13 15:18:08.273961 kernel: raid6: neonx1 gen() 10470 MB/s
Feb 13 15:18:08.290965 kernel: raid6: int64x8 gen() 6767 MB/s
Feb 13 15:18:08.307967 kernel: raid6: int64x4 gen() 7240 MB/s
Feb 13 15:18:08.324962 kernel: raid6: int64x2 gen() 6106 MB/s
Feb 13 15:18:08.341962 kernel: raid6: int64x1 gen() 5058 MB/s
Feb 13 15:18:08.341976 kernel: raid6: using algorithm neonx4 gen() 13400 MB/s
Feb 13 15:18:08.358966 kernel: raid6: .... xor() 12540 MB/s, rmw enabled
Feb 13 15:18:08.358979 kernel: raid6: using neon recovery algorithm
Feb 13 15:18:08.363964 kernel: xor: measuring software checksum speed
Feb 13 15:18:08.363981 kernel: 8regs : 21630 MB/sec
Feb 13 15:18:08.363992 kernel: 32regs : 19697 MB/sec
Feb 13 15:18:08.365310 kernel: arm64_neon : 27927 MB/sec
Feb 13 15:18:08.365322 kernel: xor: using function: arm64_neon (27927 MB/sec)
Feb 13 15:18:08.418978 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:18:08.429428 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:18:08.451137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:18:08.462323 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 15:18:08.465438 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:18:08.468403 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:18:08.481882 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 15:18:08.508251 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:18:08.517075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:18:08.555819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:18:08.563124 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:18:08.574996 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:18:08.576220 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:18:08.578901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:18:08.580653 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:18:08.586277 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:18:08.596960 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:18:08.610741 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:18:08.610853 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:18:08.610867 kernel: GPT:9289727 != 19775487 Feb 13 15:18:08.610877 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:18:08.610887 kernel: GPT:9289727 != 19775487 Feb 13 15:18:08.610896 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:18:08.610906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:18:08.598200 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:18:08.612287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:18:08.612408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:18:08.614912 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:18:08.615896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 15:18:08.616055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:18:08.617851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:18:08.630226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:18:08.633850 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516) Feb 13 15:18:08.638994 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (527) Feb 13 15:18:08.645577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:18:08.646685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:18:08.654341 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:18:08.658620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:18:08.662306 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:18:08.663263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:18:08.676166 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:18:08.677685 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:18:08.682303 disk-uuid[553]: Primary Header is updated. Feb 13 15:18:08.682303 disk-uuid[553]: Secondary Entries is updated. Feb 13 15:18:08.682303 disk-uuid[553]: Secondary Header is updated. Feb 13 15:18:08.685985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:18:08.697225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:18:09.695572 disk-uuid[554]: The operation has completed successfully. Feb 13 15:18:09.696672 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:18:09.721222 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:18:09.721326 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:18:09.747168 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:18:09.750225 sh[575]: Success Feb 13 15:18:09.763250 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:18:09.787098 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:18:09.813425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:18:09.815486 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:18:09.826306 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:18:09.826353 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:18:09.827972 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:18:09.828014 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:18:09.828025 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:18:09.831804 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:18:09.832991 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:18:09.846096 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:18:09.847496 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 15:18:09.855234 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:18:09.855282 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:18:09.855292 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:18:09.857970 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:18:09.864566 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:18:09.865958 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:18:09.875611 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:18:09.883189 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:18:09.945800 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:18:09.963140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:18:09.982937 ignition[674]: Ignition 2.20.0 Feb 13 15:18:09.982957 ignition[674]: Stage: fetch-offline Feb 13 15:18:09.983940 systemd-networkd[768]: lo: Link UP Feb 13 15:18:09.982989 ignition[674]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:09.983967 systemd-networkd[768]: lo: Gained carrier Feb 13 15:18:09.982997 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:09.985090 systemd-networkd[768]: Enumeration completed Feb 13 15:18:09.983151 ignition[674]: parsed url from cmdline: "" Feb 13 15:18:09.985316 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:18:09.983154 ignition[674]: no config URL provided Feb 13 15:18:09.985826 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:18:09.983159 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:18:09.985829 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:18:09.983167 ignition[674]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:18:09.987304 systemd-networkd[768]: eth0: Link UP Feb 13 15:18:09.983193 ignition[674]: op(1): [started] loading QEMU firmware config module Feb 13 15:18:09.987307 systemd-networkd[768]: eth0: Gained carrier Feb 13 15:18:09.983197 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:18:09.987313 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:18:09.997260 ignition[674]: op(1): [finished] loading QEMU firmware config module Feb 13 15:18:09.989040 systemd[1]: Reached target network.target - Network. Feb 13 15:18:10.000985 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:18:10.023632 ignition[674]: parsing config with SHA512: 53d0608d06c74602e22a0d1b38a32b8a74dd37cd6e931b54592d4ec7be2ada8898dd024093fa2ff24e7f22d13dc2433fd632f075f9e88c269d2d859f2c5c3df0 Feb 13 15:18:10.028201 unknown[674]: fetched base config from "system" Feb 13 15:18:10.028212 unknown[674]: fetched user config from "qemu" Feb 13 15:18:10.031217 ignition[674]: fetch-offline: fetch-offline passed Feb 13 15:18:10.031352 ignition[674]: Ignition finished successfully Feb 13 15:18:10.033375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:18:10.034849 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:18:10.045147 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:18:10.055480 ignition[774]: Ignition 2.20.0 Feb 13 15:18:10.055490 ignition[774]: Stage: kargs Feb 13 15:18:10.055646 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:10.055656 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:10.056560 ignition[774]: kargs: kargs passed Feb 13 15:18:10.056600 ignition[774]: Ignition finished successfully Feb 13 15:18:10.058697 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:18:10.068117 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:18:10.079043 ignition[783]: Ignition 2.20.0 Feb 13 15:18:10.079053 ignition[783]: Stage: disks Feb 13 15:18:10.079208 ignition[783]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:10.079217 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:10.080032 ignition[783]: disks: disks passed Feb 13 15:18:10.081480 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:18:10.080078 ignition[783]: Ignition finished successfully Feb 13 15:18:10.084577 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:18:10.085733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:18:10.087661 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:18:10.089231 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:18:10.090601 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:18:10.103091 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:18:10.112730 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:18:10.116216 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:18:10.127069 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 15:18:10.165979 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none. Feb 13 15:18:10.165994 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:18:10.167013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:18:10.178040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:18:10.179555 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:18:10.181533 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:18:10.181580 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:18:10.181601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:18:10.186598 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:18:10.188067 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) Feb 13 15:18:10.188294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:18:10.191633 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:18:10.191651 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:18:10.191660 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:18:10.192961 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:18:10.194624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:18:10.227498 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:18:10.230700 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:18:10.233667 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:18:10.236473 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:18:10.303628 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:18:10.317074 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:18:10.318456 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:18:10.322963 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:18:10.339222 ignition[915]: INFO : Ignition 2.20.0 Feb 13 15:18:10.339222 ignition[915]: INFO : Stage: mount Feb 13 15:18:10.340484 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:10.340484 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:10.340484 ignition[915]: INFO : mount: mount passed Feb 13 15:18:10.340484 ignition[915]: INFO : Ignition finished successfully Feb 13 15:18:10.341025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:18:10.343194 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:18:10.348036 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:18:10.825551 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:18:10.834220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 15:18:10.840447 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929) Feb 13 15:18:10.840473 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:18:10.840490 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:18:10.841141 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:18:10.843975 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:18:10.844491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:18:10.862167 ignition[946]: INFO : Ignition 2.20.0 Feb 13 15:18:10.862167 ignition[946]: INFO : Stage: files Feb 13 15:18:10.863401 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:10.863401 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:10.863401 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:18:10.865897 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:18:10.865897 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:18:10.865897 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:18:10.865897 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:18:10.869801 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:18:10.869801 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:18:10.869801 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:18:10.866167 unknown[946]: wrote ssh authorized keys file for user: core Feb 13 15:18:10.954471 
ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:18:11.068744 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:18:11.068744 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:18:11.071776 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 15:18:11.387860 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:18:11.601226 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:18:11.601226 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:18:11.604920 
ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:18:11.604920 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:18:11.624707 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:18:11.628355 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:18:11.629862 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:18:11.629862 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:18:11.629862 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:18:11.629862 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:18:11.629862 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:18:11.629862 ignition[946]: INFO : files: files passed Feb 13 15:18:11.629862 ignition[946]: INFO : Ignition finished successfully Feb 13 15:18:11.631703 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:18:11.645202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:18:11.646912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:18:11.649492 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:18:11.649577 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:18:11.655136 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:18:11.657694 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:18:11.657694 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:18:11.660877 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:18:11.660442 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:18:11.662037 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:18:11.679140 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:18:11.698147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:18:11.699033 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:18:11.700155 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:18:11.701701 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:18:11.703205 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:18:11.703966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:18:11.719444 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:18:11.731112 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:18:11.739121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:18:11.739935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:18:11.741651 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 15:18:11.743253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:18:11.743314 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:18:11.745702 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:18:11.747507 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:18:11.748920 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:18:11.750422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:18:11.752043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:18:11.753689 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:18:11.755287 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:18:11.756893 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:18:11.758728 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:18:11.760289 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:18:11.761623 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:18:11.761691 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:18:11.763830 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:18:11.765608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:18:11.767283 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:18:11.772008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:18:11.772883 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:18:11.772972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:18:11.775663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 15:18:11.775703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:18:11.777361 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:18:11.778740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:18:11.783006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:18:11.783942 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:18:11.785805 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:18:11.787229 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:18:11.787276 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:18:11.788629 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:18:11.788665 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:18:11.790006 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:18:11.790052 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:18:11.791597 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:18:11.791635 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:18:11.804067 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:18:11.804732 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:18:11.804788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:18:11.807504 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:18:11.808905 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:18:11.808969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:18:11.810612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 15:18:11.810673 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:18:11.813630 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:18:11.813717 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:18:11.819047 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:18:11.821360 ignition[1000]: INFO : Ignition 2.20.0 Feb 13 15:18:11.821360 ignition[1000]: INFO : Stage: umount Feb 13 15:18:11.822998 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:18:11.822998 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:18:11.822998 ignition[1000]: INFO : umount: umount passed Feb 13 15:18:11.822998 ignition[1000]: INFO : Ignition finished successfully Feb 13 15:18:11.825321 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:18:11.825423 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:18:11.826760 systemd[1]: Stopped target network.target - Network. Feb 13 15:18:11.828669 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:18:11.828738 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:18:11.830269 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:18:11.830313 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:18:11.831659 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:18:11.831697 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:18:11.833005 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:18:11.833041 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:18:11.834733 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:18:11.836150 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 15:18:11.847004 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 15:18:11.848174 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:18:11.848275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:18:11.850608 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:18:11.850706 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:18:11.852473 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:18:11.852514 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:18:11.870132 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:18:11.870796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:18:11.870859 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:18:11.872661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:18:11.872704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:18:11.874231 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:18:11.874269 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:18:11.876037 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:18:11.876073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:18:11.877756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:18:11.880071 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:18:11.880156 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:18:11.884031 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:18:11.884094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Feb 13 15:18:11.888682 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:18:11.888775 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:18:11.901527 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:18:11.901700 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:18:11.903653 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:18:11.903692 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:18:11.904541 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:18:11.904569 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:18:11.905345 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:18:11.905383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:18:11.908023 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:18:11.908064 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:18:11.909728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:18:11.909766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:18:11.924128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:18:11.924961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:18:11.925024 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:18:11.926838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:18:11.926879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:11.933111 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:18:11.934027 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:18:11.935206 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:18:11.937491 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:18:11.946978 systemd[1]: Switching root.
Feb 13 15:18:11.969825 systemd-journald[240]: Journal stopped
Feb 13 15:18:12.650682 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:18:12.650736 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:18:12.650748 kernel: SELinux: policy capability open_perms=1
Feb 13 15:18:12.650759 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:18:12.650768 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:18:12.650777 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:18:12.650792 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:18:12.650802 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:18:12.650811 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:18:12.650820 kernel: audit: type=1403 audit(1739459892.104:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:18:12.650830 systemd[1]: Successfully loaded SELinux policy in 32.437ms.
Feb 13 15:18:12.650848 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.486ms.
Feb 13 15:18:12.650859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:18:12.650869 systemd[1]: Detected virtualization kvm.
Feb 13 15:18:12.650879 systemd[1]: Detected architecture arm64.
Feb 13 15:18:12.650897 systemd[1]: Detected first boot.
Feb 13 15:18:12.650909 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:18:12.650920 zram_generator::config[1044]: No configuration found.
Feb 13 15:18:12.650930 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:18:12.650940 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:18:12.650961 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:18:12.650972 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:18:12.650983 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:18:12.650995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:18:12.651007 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:18:12.651017 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:18:12.651027 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:18:12.651038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:18:12.651048 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:18:12.651058 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:18:12.651068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:18:12.651080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:18:12.651090 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:18:12.651106 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:18:12.651116 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:18:12.651126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:18:12.651136 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:18:12.651147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:18:12.651157 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:18:12.651166 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:18:12.651178 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:18:12.651189 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:18:12.651199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:18:12.651209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:18:12.651219 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:18:12.651230 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:18:12.651240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:18:12.651250 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:18:12.651262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:18:12.651273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:18:12.651283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:18:12.651293 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:18:12.651303 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:18:12.651313 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:18:12.651323 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:18:12.651333 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:18:12.651343 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:18:12.651355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:18:12.651366 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:18:12.651376 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:18:12.651386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:18:12.651397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:18:12.651407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:18:12.651416 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:18:12.651428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:18:12.651437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:18:12.651450 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:18:12.651460 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:18:12.651470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:18:12.651480 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:18:12.651490 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:18:12.651500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:18:12.651510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:18:12.651520 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:18:12.651531 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:18:12.651542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:18:12.651552 kernel: fuse: init (API version 7.39)
Feb 13 15:18:12.651561 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:18:12.651571 kernel: loop: module loaded
Feb 13 15:18:12.651598 systemd-journald[1104]: Collecting audit messages is disabled.
Feb 13 15:18:12.651619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:18:12.651630 systemd-journald[1104]: Journal started
Feb 13 15:18:12.651657 systemd-journald[1104]: Runtime Journal (/run/log/journal/6e9f05fc908a4c858f6cb719b780f2f8) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:18:12.463462 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:18:12.476849 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:18:12.477217 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:18:12.657012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:18:12.658419 kernel: ACPI: bus type drm_connector registered
Feb 13 15:18:12.660113 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:18:12.660154 systemd[1]: Stopped verity-setup.service.
Feb 13 15:18:12.662990 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:18:12.664303 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:18:12.665305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:18:12.666255 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:18:12.667083 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:18:12.667961 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:18:12.668919 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:18:12.669900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:18:12.671141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:18:12.671301 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:18:12.672423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:18:12.672579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:18:12.673820 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:18:12.673982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:18:12.675626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:18:12.675776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:18:12.676935 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:18:12.677085 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:18:12.679193 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:18:12.679321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:18:12.680361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:18:12.681416 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:18:12.682570 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:18:12.693637 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:18:12.700053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:18:12.701820 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:18:12.702746 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:18:12.702776 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:18:12.704466 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:18:12.706402 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:18:12.708158 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:18:12.709007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:18:12.711369 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:18:12.714050 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:18:12.716693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:18:12.717926 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:18:12.719831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:18:12.721238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:18:12.724652 systemd-journald[1104]: Time spent on flushing to /var/log/journal/6e9f05fc908a4c858f6cb719b780f2f8 is 13.334ms for 853 entries.
Feb 13 15:18:12.724652 systemd-journald[1104]: System Journal (/var/log/journal/6e9f05fc908a4c858f6cb719b780f2f8) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:18:12.852444 systemd-journald[1104]: Received client request to flush runtime journal.
Feb 13 15:18:12.852507 kernel: loop0: detected capacity change from 0 to 113552
Feb 13 15:18:12.852526 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:18:12.726128 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:18:12.728797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:18:12.730749 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:18:12.733206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:18:12.734464 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:18:12.747442 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:18:12.750433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:18:12.756179 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:18:12.833406 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:18:12.835622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:18:12.847159 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:18:12.852977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:18:12.854372 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:18:12.864971 kernel: loop1: detected capacity change from 0 to 116784
Feb 13 15:18:12.867284 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:18:12.871252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:18:12.873637 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:18:12.895045 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:18:12.903113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:18:12.906038 kernel: loop2: detected capacity change from 0 to 189592
Feb 13 15:18:12.926325 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 15:18:12.926347 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 15:18:12.931245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:18:12.941040 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 15:18:12.947031 kernel: loop4: detected capacity change from 0 to 116784
Feb 13 15:18:12.952963 kernel: loop5: detected capacity change from 0 to 189592
Feb 13 15:18:12.957491 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:18:12.958368 (sd-merge)[1180]: Merged extensions into '/usr'.
Feb 13 15:18:12.961961 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:18:12.961973 systemd[1]: Reloading...
Feb 13 15:18:13.024013 zram_generator::config[1205]: No configuration found.
Feb 13 15:18:13.093426 ldconfig[1135]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:18:13.119940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:13.155803 systemd[1]: Reloading finished in 193 ms.
Feb 13 15:18:13.185993 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:18:13.187358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:18:13.201324 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:18:13.203415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:18:13.218636 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:18:13.218655 systemd[1]: Reloading...
Feb 13 15:18:13.225805 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:18:13.226062 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:18:13.226740 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:18:13.227006 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 15:18:13.227078 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 15:18:13.229799 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:18:13.229815 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 15:18:13.238223 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:18:13.238239 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 15:18:13.267486 zram_generator::config[1272]: No configuration found.
Feb 13 15:18:13.353609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:13.390052 systemd[1]: Reloading finished in 171 ms.
Feb 13 15:18:13.405057 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:18:13.418393 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:18:13.426699 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:18:13.429050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:18:13.431091 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:18:13.436218 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:18:13.441227 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:18:13.443314 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:18:13.446363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:18:13.448411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:18:13.453463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:18:13.457639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:18:13.459545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:18:13.462169 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:18:13.463696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:18:13.463863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:18:13.472034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:18:13.473775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:18:13.475050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:18:13.476988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:18:13.478816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:18:13.480610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:18:13.480920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:18:13.482542 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:18:13.482672 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:18:13.483085 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Feb 13 15:18:13.485795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:18:13.487173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:18:13.493356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:18:13.501317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:18:13.506222 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:18:13.519696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:18:13.531332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:18:13.532269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:18:13.535076 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:18:13.536855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:18:13.539585 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:18:13.543281 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:18:13.544805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:18:13.545220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:18:13.547025 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:18:13.547206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:18:13.549405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:18:13.549530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:18:13.551054 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:18:13.551181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:18:13.556493 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:18:13.560747 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:18:13.581440 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:18:13.592908 augenrules[1383]: No rules
Feb 13 15:18:13.595199 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:18:13.596156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:18:13.596243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:18:13.600984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1357)
Feb 13 15:18:13.602077 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:18:13.604898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:18:13.605371 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:18:13.605587 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:18:13.634792 systemd-resolved[1308]: Positive Trust Anchors:
Feb 13 15:18:13.634808 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:18:13.634840 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:18:13.648288 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Feb 13 15:18:13.653776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:18:13.654744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:18:13.661993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:18:13.673169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:18:13.685573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:18:13.693213 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:18:13.695752 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:18:13.697080 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:18:13.698486 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:18:13.700434 systemd-networkd[1382]: lo: Link UP
Feb 13 15:18:13.700448 systemd-networkd[1382]: lo: Gained carrier
Feb 13 15:18:13.701423 systemd-networkd[1382]: Enumeration completed
Feb 13 15:18:13.702363 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:18:13.702374 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:18:13.703384 systemd-networkd[1382]: eth0: Link UP
Feb 13 15:18:13.703393 systemd-networkd[1382]: eth0: Gained carrier
Feb 13 15:18:13.703407 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:18:13.708986 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:18:13.710310 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:18:13.711583 systemd[1]: Reached target network.target - Network.
Feb 13 15:18:13.711991 systemd-networkd[1382]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:18:13.712578 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection.
Feb 13 15:18:13.713672 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:18:13.713721 systemd-timesyncd[1388]: Initial clock synchronization to Thu 2025-02-13 15:18:13.679224 UTC.
Feb 13 15:18:13.714213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:18:13.724885 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:18:13.738821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:18:13.761002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:18:13.762703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:18:13.763732 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:18:13.764747 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:18:13.765847 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:18:13.767143 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:18:13.768289 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:18:13.769311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:18:13.770321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:18:13.770356 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:18:13.771103 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:18:13.772669 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:18:13.774923 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:18:13.784853 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:18:13.786896 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:18:13.788305 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:18:13.789293 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:18:13.790053 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:18:13.790783 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:18:13.790812 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:18:13.791688 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:18:13.793475 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:18:13.795633 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:18:13.796860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:18:13.803045 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:18:13.803910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:18:13.807152 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:18:13.809285 jq[1413]: false Feb 13 15:18:13.810548 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:18:13.812615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:18:13.817129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:18:13.820904 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:18:13.822408 extend-filesystems[1414]: Found loop3 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found loop4 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found loop5 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda1 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda2 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda3 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found usr Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda4 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda6 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda7 Feb 13 15:18:13.822408 extend-filesystems[1414]: Found vda9 Feb 13 15:18:13.822408 extend-filesystems[1414]: Checking size of /dev/vda9 Feb 13 15:18:13.822796 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:18:13.824998 dbus-daemon[1412]: [system] SELinux support is enabled Feb 13 15:18:13.823244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:18:13.831134 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:18:13.833762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:18:13.835438 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:18:13.838361 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:18:13.842328 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:18:13.842484 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:18:13.842739 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:18:13.842873 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 15:18:13.847917 jq[1432]: true Feb 13 15:18:13.851408 extend-filesystems[1414]: Resized partition /dev/vda9 Feb 13 15:18:13.856385 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:18:13.856578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:18:13.860968 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352) Feb 13 15:18:13.874671 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:18:13.886414 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:18:13.886439 tar[1435]: linux-arm64/helm Feb 13 15:18:13.881273 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:18:13.886789 jq[1437]: true Feb 13 15:18:13.904257 update_engine[1425]: I20250213 15:18:13.900221 1425 main.cc:92] Flatcar Update Engine starting Feb 13 15:18:13.907010 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:18:13.909088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:18:13.909123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:18:13.919043 update_engine[1425]: I20250213 15:18:13.910934 1425 update_check_scheduler.cc:74] Next update check in 10m11s Feb 13 15:18:13.912050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:18:13.912069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:18:13.913870 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 15:18:13.917753 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:18:13.920010 systemd-logind[1422]: New seat seat0. Feb 13 15:18:13.922141 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:18:13.922141 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:18:13.922141 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:18:13.921173 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:18:13.931411 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Feb 13 15:18:13.922817 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:18:13.923610 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:18:13.926968 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:18:13.970117 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:18:13.973213 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:18:13.975935 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:18:13.984322 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:18:14.173451 containerd[1447]: time="2025-02-13T15:18:14.173313238Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:18:14.200121 containerd[1447]: time="2025-02-13T15:18:14.200068511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.201528 containerd[1447]: time="2025-02-13T15:18:14.201491941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:18:14.201528 containerd[1447]: time="2025-02-13T15:18:14.201523787Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:18:14.201603 containerd[1447]: time="2025-02-13T15:18:14.201540049Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:18:14.201827 containerd[1447]: time="2025-02-13T15:18:14.201705991Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:18:14.201827 containerd[1447]: time="2025-02-13T15:18:14.201727727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.201827 containerd[1447]: time="2025-02-13T15:18:14.201790859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:18:14.201827 containerd[1447]: time="2025-02-13T15:18:14.201802847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202275 containerd[1447]: time="2025-02-13T15:18:14.201981854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202275 containerd[1447]: time="2025-02-13T15:18:14.202000994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202275 containerd[1447]: time="2025-02-13T15:18:14.202014379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202275 containerd[1447]: time="2025-02-13T15:18:14.202024009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202275 containerd[1447]: time="2025-02-13T15:18:14.202095612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202375 containerd[1447]: time="2025-02-13T15:18:14.202290442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202413 containerd[1447]: time="2025-02-13T15:18:14.202389456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:18:14.202413 containerd[1447]: time="2025-02-13T15:18:14.202409834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:18:14.202505 containerd[1447]: time="2025-02-13T15:18:14.202486671Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:18:14.202548 containerd[1447]: time="2025-02-13T15:18:14.202533301Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:18:14.207869 containerd[1447]: time="2025-02-13T15:18:14.207834482Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:18:14.207927 containerd[1447]: time="2025-02-13T15:18:14.207888104Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 15:18:14.207927 containerd[1447]: time="2025-02-13T15:18:14.207902848Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:18:14.207927 containerd[1447]: time="2025-02-13T15:18:14.207917193Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:18:14.208010 containerd[1447]: time="2025-02-13T15:18:14.207931617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208087729Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208384730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208493493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208510035Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208529534Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208544798Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208561420Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208573887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208587991Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208601817Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208622674Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208635061Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208645649Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:18:14.208774 containerd[1447]: time="2025-02-13T15:18:14.208672261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208688603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208700071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208712338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208723925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208736032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208747540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208759008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208772513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208800603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208812630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208825416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208837044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208851388Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208871766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 15:18:14.209043 containerd[1447]: time="2025-02-13T15:18:14.208885511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.208896100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209079423Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209102318Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209113027Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209124455Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209133005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209148469Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209157939Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:18:14.209297 containerd[1447]: time="2025-02-13T15:18:14.209167408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:18:14.209798 containerd[1447]: time="2025-02-13T15:18:14.209508562Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:18:14.209798 containerd[1447]: time="2025-02-13T15:18:14.209565101Z" level=info msg="Connect containerd service" Feb 13 15:18:14.209798 containerd[1447]: time="2025-02-13T15:18:14.209598785Z" level=info msg="using legacy CRI server" Feb 13 15:18:14.209798 containerd[1447]: time="2025-02-13T15:18:14.209609853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:18:14.209980 containerd[1447]: time="2025-02-13T15:18:14.209852752Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:18:14.210807 containerd[1447]: time="2025-02-13T15:18:14.210635750Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:18:14.210860 containerd[1447]: time="2025-02-13T15:18:14.210826784Z" level=info msg="Start subscribing containerd event" Feb 13 15:18:14.210882 containerd[1447]: time="2025-02-13T15:18:14.210868899Z" level=info msg="Start recovering state" Feb 13 15:18:14.211214 containerd[1447]: time="2025-02-13T15:18:14.210924959Z" level=info msg="Start event monitor" Feb 13 15:18:14.211214 containerd[1447]: time="2025-02-13T15:18:14.210940182Z" level=info msg="Start 
snapshots syncer" Feb 13 15:18:14.211214 containerd[1447]: time="2025-02-13T15:18:14.211002036Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:18:14.211214 containerd[1447]: time="2025-02-13T15:18:14.211011026Z" level=info msg="Start streaming server" Feb 13 15:18:14.213474 containerd[1447]: time="2025-02-13T15:18:14.213442971Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:18:14.213533 containerd[1447]: time="2025-02-13T15:18:14.213505664Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:18:14.213588 containerd[1447]: time="2025-02-13T15:18:14.213567677Z" level=info msg="containerd successfully booted in 0.042240s" Feb 13 15:18:14.213676 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:18:14.249494 tar[1435]: linux-arm64/LICENSE Feb 13 15:18:14.249599 tar[1435]: linux-arm64/README.md Feb 13 15:18:14.261996 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:18:14.545212 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:18:14.564559 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:18:14.581281 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:18:14.587839 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:18:14.588073 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:18:14.590736 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:18:14.605627 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:18:14.608159 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:18:14.610182 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:18:14.611450 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:18:15.087108 systemd-networkd[1382]: eth0: Gained IPv6LL Feb 13 15:18:15.089889 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:18:15.091615 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:18:15.101221 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:18:15.103603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:15.105668 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:18:15.121116 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:18:15.121319 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:18:15.122748 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:18:15.127360 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:18:15.633909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:15.635331 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:18:15.636334 systemd[1]: Startup finished in 563ms (kernel) + 4.385s (initrd) + 3.567s (userspace) = 8.516s. 
Feb 13 15:18:15.639257 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:18:15.653731 agetty[1502]: failed to open credentials directory Feb 13 15:18:15.653846 agetty[1503]: failed to open credentials directory Feb 13 15:18:16.141498 kubelet[1526]: E0213 15:18:16.141446 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:18:16.143882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:18:16.144052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:18:20.078524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:18:20.079643 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:34628.service - OpenSSH per-connection server daemon (10.0.0.1:34628). Feb 13 15:18:20.151341 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 34628 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:18:20.156701 sshd-session[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:20.174699 systemd-logind[1422]: New session 1 of user core. Feb 13 15:18:20.175370 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:18:20.184223 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:18:20.201975 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:18:20.218280 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:18:20.221095 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:18:20.290834 systemd[1543]: Queued start job for default target default.target. Feb 13 15:18:20.310006 systemd[1543]: Created slice app.slice - User Application Slice. Feb 13 15:18:20.310053 systemd[1543]: Reached target paths.target - Paths. Feb 13 15:18:20.310066 systemd[1543]: Reached target timers.target - Timers. Feb 13 15:18:20.311342 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:18:20.321419 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:18:20.321485 systemd[1543]: Reached target sockets.target - Sockets. Feb 13 15:18:20.321498 systemd[1543]: Reached target basic.target - Basic System. Feb 13 15:18:20.321544 systemd[1543]: Reached target default.target - Main User Target. Feb 13 15:18:20.321578 systemd[1543]: Startup finished in 94ms. Feb 13 15:18:20.321804 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:18:20.323121 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:18:20.386462 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:34632.service - OpenSSH per-connection server daemon (10.0.0.1:34632). Feb 13 15:18:20.432351 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 34632 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:18:20.433698 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:20.438922 systemd-logind[1422]: New session 2 of user core. Feb 13 15:18:20.453149 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:18:20.505718 sshd[1556]: Connection closed by 10.0.0.1 port 34632 Feb 13 15:18:20.506086 sshd-session[1554]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:20.514339 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:34632.service: Deactivated successfully. 
Feb 13 15:18:20.515879 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:18:20.519065 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:18:20.528291 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:34644.service - OpenSSH per-connection server daemon (10.0.0.1:34644). Feb 13 15:18:20.529260 systemd-logind[1422]: Removed session 2. Feb 13 15:18:20.565876 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 34644 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:18:20.567066 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:20.570999 systemd-logind[1422]: New session 3 of user core. Feb 13 15:18:20.582153 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:18:20.629856 sshd[1563]: Connection closed by 10.0.0.1 port 34644 Feb 13 15:18:20.630202 sshd-session[1561]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:20.648431 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:34644.service: Deactivated successfully. Feb 13 15:18:20.649835 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:18:20.652136 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:18:20.653251 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:34660.service - OpenSSH per-connection server daemon (10.0.0.1:34660). Feb 13 15:18:20.655003 systemd-logind[1422]: Removed session 3. Feb 13 15:18:20.694921 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 34660 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:18:20.696193 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:20.700978 systemd-logind[1422]: New session 4 of user core. Feb 13 15:18:20.706178 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:18:20.758908 sshd[1570]: Connection closed by 10.0.0.1 port 34660 Feb 13 15:18:20.759254 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:20.774231 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:34660.service: Deactivated successfully. Feb 13 15:18:20.775569 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:18:20.776663 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:18:20.777711 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:34676.service - OpenSSH per-connection server daemon (10.0.0.1:34676). Feb 13 15:18:20.778793 systemd-logind[1422]: Removed session 4. Feb 13 15:18:20.818266 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 34676 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:18:20.819487 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:20.823540 systemd-logind[1422]: New session 5 of user core. Feb 13 15:18:20.834125 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:18:20.892926 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:18:20.893238 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:18:21.237209 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:18:21.237287 (dockerd)[1598]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:18:21.509863 dockerd[1598]: time="2025-02-13T15:18:21.509734726Z" level=info msg="Starting up" Feb 13 15:18:21.667053 dockerd[1598]: time="2025-02-13T15:18:21.667004759Z" level=info msg="Loading containers: start." 
Feb 13 15:18:21.816997 kernel: Initializing XFRM netlink socket Feb 13 15:18:21.878781 systemd-networkd[1382]: docker0: Link UP Feb 13 15:18:21.907324 dockerd[1598]: time="2025-02-13T15:18:21.907268996Z" level=info msg="Loading containers: done." Feb 13 15:18:21.919518 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3727818237-merged.mount: Deactivated successfully. Feb 13 15:18:21.923914 dockerd[1598]: time="2025-02-13T15:18:21.923855560Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:18:21.924039 dockerd[1598]: time="2025-02-13T15:18:21.923964426Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:18:21.924166 dockerd[1598]: time="2025-02-13T15:18:21.924128765Z" level=info msg="Daemon has completed initialization" Feb 13 15:18:21.957128 dockerd[1598]: time="2025-02-13T15:18:21.957063976Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:18:21.957257 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:18:22.516959 containerd[1447]: time="2025-02-13T15:18:22.516910950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:18:23.262870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903274632.mount: Deactivated successfully. 
Feb 13 15:18:24.170967 containerd[1447]: time="2025-02-13T15:18:24.170700740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:24.171966 containerd[1447]: time="2025-02-13T15:18:24.171901561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 15:18:24.172737 containerd[1447]: time="2025-02-13T15:18:24.172678673Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:24.176611 containerd[1447]: time="2025-02-13T15:18:24.176574025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:24.178865 containerd[1447]: time="2025-02-13T15:18:24.178248474Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.661300196s" Feb 13 15:18:24.178865 containerd[1447]: time="2025-02-13T15:18:24.178294119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:18:24.179634 containerd[1447]: time="2025-02-13T15:18:24.179468760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:18:25.312962 containerd[1447]: time="2025-02-13T15:18:25.312902969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:25.314309 containerd[1447]: time="2025-02-13T15:18:25.314100301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 15:18:25.315025 containerd[1447]: time="2025-02-13T15:18:25.314971481Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:25.317769 containerd[1447]: time="2025-02-13T15:18:25.317720717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:25.318791 containerd[1447]: time="2025-02-13T15:18:25.318755413Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.13925208s" Feb 13 15:18:25.318791 containerd[1447]: time="2025-02-13T15:18:25.318787429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:18:25.319348 containerd[1447]: time="2025-02-13T15:18:25.319316827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:18:26.362371 containerd[1447]: time="2025-02-13T15:18:26.362313789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:26.363157 containerd[1447]: time="2025-02-13T15:18:26.363110125Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 15:18:26.363821 containerd[1447]: time="2025-02-13T15:18:26.363786948Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:26.367649 containerd[1447]: time="2025-02-13T15:18:26.367605584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:26.368434 containerd[1447]: time="2025-02-13T15:18:26.368304271Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.048962662s" Feb 13 15:18:26.368434 containerd[1447]: time="2025-02-13T15:18:26.368334568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:18:26.368925 containerd[1447]: time="2025-02-13T15:18:26.368746626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:18:26.394306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:18:26.403133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:26.491095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:18:26.495294 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:18:26.532732 kubelet[1864]: E0213 15:18:26.532666 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:18:26.535662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:18:26.535810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:18:27.396868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380958497.mount: Deactivated successfully. Feb 13 15:18:27.887600 containerd[1447]: time="2025-02-13T15:18:27.887411573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:27.900309 containerd[1447]: time="2025-02-13T15:18:27.900246403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 15:18:27.913469 containerd[1447]: time="2025-02-13T15:18:27.913422271Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:27.924461 containerd[1447]: time="2025-02-13T15:18:27.924416330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:27.925044 containerd[1447]: time="2025-02-13T15:18:27.925005471Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id 
\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.556228147s" Feb 13 15:18:27.925089 containerd[1447]: time="2025-02-13T15:18:27.925043324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:18:27.925651 containerd[1447]: time="2025-02-13T15:18:27.925612799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:18:28.585123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947306753.mount: Deactivated successfully. Feb 13 15:18:29.101238 containerd[1447]: time="2025-02-13T15:18:29.101167953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.101932 containerd[1447]: time="2025-02-13T15:18:29.101864208Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:18:29.102546 containerd[1447]: time="2025-02-13T15:18:29.102513935Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.105744 containerd[1447]: time="2025-02-13T15:18:29.105687456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.107038 containerd[1447]: time="2025-02-13T15:18:29.107008494Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.181362079s" Feb 13 15:18:29.107108 containerd[1447]: time="2025-02-13T15:18:29.107042991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:18:29.107547 containerd[1447]: time="2025-02-13T15:18:29.107491252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:18:29.590976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562439310.mount: Deactivated successfully. Feb 13 15:18:29.596794 containerd[1447]: time="2025-02-13T15:18:29.596748294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.599162 containerd[1447]: time="2025-02-13T15:18:29.599079778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 15:18:29.599968 containerd[1447]: time="2025-02-13T15:18:29.599921376Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.602336 containerd[1447]: time="2025-02-13T15:18:29.602301027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:29.603764 containerd[1447]: time="2025-02-13T15:18:29.603729274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 495.439875ms" Feb 13 15:18:29.603764 containerd[1447]: time="2025-02-13T15:18:29.603759134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:18:29.604388 containerd[1447]: time="2025-02-13T15:18:29.604241092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:18:30.250583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343351663.mount: Deactivated successfully. Feb 13 15:18:31.790744 containerd[1447]: time="2025-02-13T15:18:31.790686710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:31.791230 containerd[1447]: time="2025-02-13T15:18:31.791192833Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 15:18:31.795202 containerd[1447]: time="2025-02-13T15:18:31.795158789Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:31.798568 containerd[1447]: time="2025-02-13T15:18:31.798531116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:31.799726 containerd[1447]: time="2025-02-13T15:18:31.799689230Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.195420317s" Feb 13 
15:18:31.799760 containerd[1447]: time="2025-02-13T15:18:31.799728126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:18:35.738123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:35.749185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:35.769204 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit session-5.scope)... Feb 13 15:18:35.769222 systemd[1]: Reloading... Feb 13 15:18:35.840989 zram_generator::config[2055]: No configuration found. Feb 13 15:18:35.956100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:18:36.009325 systemd[1]: Reloading finished in 239 ms. Feb 13 15:18:36.045521 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:18:36.045586 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:18:36.045780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:36.047978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:36.136135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:36.140631 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:18:36.178089 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:18:36.178089 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:18:36.178089 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:36.178428 kubelet[2098]: I0213 15:18:36.178186 2098 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:18:36.490809 kubelet[2098]: I0213 15:18:36.489564 2098 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:18:36.490809 kubelet[2098]: I0213 15:18:36.489594 2098 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:18:36.490809 kubelet[2098]: I0213 15:18:36.489845 2098 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:18:36.543807 kubelet[2098]: E0213 15:18:36.543763 2098 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:18:36.544304 kubelet[2098]: I0213 15:18:36.544256 2098 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:18:36.554132 kubelet[2098]: E0213 15:18:36.554065 2098 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:18:36.554132 kubelet[2098]: I0213 15:18:36.554119 2098 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:18:36.557641 kubelet[2098]: I0213 15:18:36.557613 2098 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:18:36.557930 kubelet[2098]: I0213 15:18:36.557901 2098 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:18:36.558105 kubelet[2098]: I0213 15:18:36.558054 2098 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:18:36.558275 kubelet[2098]: I0213 15:18:36.558090 2098 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"C
PUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:18:36.558355 kubelet[2098]: I0213 15:18:36.558341 2098 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:18:36.558355 kubelet[2098]: I0213 15:18:36.558352 2098 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:18:36.558553 kubelet[2098]: I0213 15:18:36.558524 2098 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:36.562249 kubelet[2098]: I0213 15:18:36.561992 2098 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:18:36.562249 kubelet[2098]: I0213 15:18:36.562025 2098 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:18:36.562249 kubelet[2098]: I0213 15:18:36.562124 2098 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:18:36.562249 kubelet[2098]: I0213 15:18:36.562136 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:18:36.564242 kubelet[2098]: I0213 15:18:36.564196 2098 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:18:36.569359 kubelet[2098]: I0213 15:18:36.569319 2098 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:18:36.569659 kubelet[2098]: W0213 15:18:36.569302 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 13 15:18:36.569659 kubelet[2098]: E0213 15:18:36.569631 
2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:18:36.571347 kubelet[2098]: W0213 15:18:36.569809 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 13 15:18:36.571347 kubelet[2098]: E0213 15:18:36.569856 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:18:36.571347 kubelet[2098]: W0213 15:18:36.570015 2098 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:18:36.571347 kubelet[2098]: I0213 15:18:36.570739 2098 server.go:1269] "Started kubelet" Feb 13 15:18:36.572171 kubelet[2098]: I0213 15:18:36.572136 2098 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:18:36.573166 kubelet[2098]: I0213 15:18:36.573076 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:18:36.573705 kubelet[2098]: I0213 15:18:36.573683 2098 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:18:36.575383 kubelet[2098]: I0213 15:18:36.575350 2098 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:18:36.576468 kubelet[2098]: I0213 15:18:36.576359 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:18:36.577195 kubelet[2098]: I0213 15:18:36.577173 2098 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:18:36.577683 kubelet[2098]: E0213 15:18:36.577620 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:36.577847 kubelet[2098]: I0213 15:18:36.577827 2098 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:18:36.578053 kubelet[2098]: I0213 15:18:36.578029 2098 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:18:36.578120 kubelet[2098]: I0213 15:18:36.578101 2098 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:18:36.578490 kubelet[2098]: W0213 15:18:36.578441 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 13 15:18:36.578540 kubelet[2098]: E0213 15:18:36.578495 2098 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:18:36.578666 kubelet[2098]: I0213 15:18:36.578648 2098 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:18:36.578888 kubelet[2098]: I0213 15:18:36.578726 2098 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:18:36.579642 kubelet[2098]: E0213 15:18:36.579602 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Feb 13 15:18:36.579642 kubelet[2098]: I0213 15:18:36.579674 2098 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:18:36.579941 kubelet[2098]: E0213 15:18:36.579873 2098 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:18:36.582010 kubelet[2098]: E0213 15:18:36.578030 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd93fc3a6828 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:18:36.5707162 +0000 UTC m=+0.427104058,LastTimestamp:2025-02-13 15:18:36.5707162 +0000 UTC m=+0.427104058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:18:36.593830 kubelet[2098]: I0213 15:18:36.593784 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:18:36.594095 kubelet[2098]: I0213 15:18:36.593837 2098 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:18:36.594095 kubelet[2098]: I0213 15:18:36.594094 2098 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:18:36.594185 kubelet[2098]: I0213 15:18:36.594115 2098 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:36.595167 kubelet[2098]: I0213 15:18:36.595146 2098 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:18:36.595239 kubelet[2098]: I0213 15:18:36.595229 2098 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:18:36.595351 kubelet[2098]: I0213 15:18:36.595281 2098 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:18:36.595414 kubelet[2098]: E0213 15:18:36.595396 2098 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:18:36.600446 kubelet[2098]: W0213 15:18:36.600403 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Feb 13 15:18:36.600554 kubelet[2098]: E0213 15:18:36.600455 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:18:36.634853 kubelet[2098]: I0213 15:18:36.634795 2098 policy_none.go:49] "None policy: Start" Feb 13 15:18:36.635655 kubelet[2098]: I0213 15:18:36.635637 2098 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:18:36.635721 kubelet[2098]: I0213 15:18:36.635665 2098 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:18:36.642519 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:18:36.656644 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:18:36.659329 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:18:36.674931 kubelet[2098]: I0213 15:18:36.674891 2098 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:18:36.675321 kubelet[2098]: I0213 15:18:36.675292 2098 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:18:36.675365 kubelet[2098]: I0213 15:18:36.675312 2098 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:18:36.675528 kubelet[2098]: I0213 15:18:36.675511 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:18:36.677297 kubelet[2098]: E0213 15:18:36.677264 2098 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:18:36.703478 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 15:18:36.720677 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 15:18:36.741530 systemd[1]: Created slice kubepods-burstable-pod3e2031a310ccbf6dcf7ae8e87f037387.slice - libcontainer container kubepods-burstable-pod3e2031a310ccbf6dcf7ae8e87f037387.slice. 
Feb 13 15:18:36.776777 kubelet[2098]: I0213 15:18:36.776751 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:18:36.777474 kubelet[2098]: E0213 15:18:36.777446 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 13 15:18:36.780926 kubelet[2098]: E0213 15:18:36.780892 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms"
Feb 13 15:18:36.879511 kubelet[2098]: I0213 15:18:36.879245 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:36.879511 kubelet[2098]: I0213 15:18:36.879285 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:36.879511 kubelet[2098]: I0213 15:18:36.879307 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:36.879511 kubelet[2098]: I0213 15:18:36.879326 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:36.879511 kubelet[2098]: I0213 15:18:36.879344 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:36.879737 kubelet[2098]: I0213 15:18:36.879365 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:36.879737 kubelet[2098]: I0213 15:18:36.879379 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:18:36.879737 kubelet[2098]: I0213 15:18:36.879392 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:36.879737 kubelet[2098]: I0213 15:18:36.879410 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:36.980428 kubelet[2098]: I0213 15:18:36.980383 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:18:36.981122 kubelet[2098]: E0213 15:18:36.981074 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 13 15:18:37.019694 kubelet[2098]: E0213 15:18:37.019554 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.020315 containerd[1447]: time="2025-02-13T15:18:37.020263566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:37.039528 kubelet[2098]: E0213 15:18:37.039447 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.040236 containerd[1447]: time="2025-02-13T15:18:37.040149151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:37.044536 kubelet[2098]: E0213 15:18:37.044458 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.044874 containerd[1447]: time="2025-02-13T15:18:37.044831167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e2031a310ccbf6dcf7ae8e87f037387,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:37.181400 kubelet[2098]: E0213 15:18:37.181343 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms"
Feb 13 15:18:37.382556 kubelet[2098]: I0213 15:18:37.382440 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:18:37.383391 kubelet[2098]: E0213 15:18:37.383329 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 13 15:18:37.438369 kubelet[2098]: E0213 15:18:37.438254 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd93fc3a6828 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:18:36.5707162 +0000 UTC m=+0.427104058,LastTimestamp:2025-02-13 15:18:36.5707162 +0000 UTC m=+0.427104058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:18:37.585102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632478461.mount: Deactivated successfully.
Feb 13 15:18:37.595755 containerd[1447]: time="2025-02-13T15:18:37.595697046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:37.597026 containerd[1447]: time="2025-02-13T15:18:37.596960312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:18:37.598622 containerd[1447]: time="2025-02-13T15:18:37.598557565Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:37.603173 containerd[1447]: time="2025-02-13T15:18:37.603121602Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:37.604236 containerd[1447]: time="2025-02-13T15:18:37.604205081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:37.604689 containerd[1447]: time="2025-02-13T15:18:37.604563735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:18:37.605301 containerd[1447]: time="2025-02-13T15:18:37.605265812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:18:37.606621 containerd[1447]: time="2025-02-13T15:18:37.606589206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.358178ms"
Feb 13 15:18:37.608128 containerd[1447]: time="2025-02-13T15:18:37.608036817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:18:37.612178 containerd[1447]: time="2025-02-13T15:18:37.611853841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.505758ms"
Feb 13 15:18:37.614020 containerd[1447]: time="2025-02-13T15:18:37.613331196Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.431345ms"
Feb 13 15:18:37.655601 kubelet[2098]: W0213 15:18:37.655479 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 13 15:18:37.656135 kubelet[2098]: E0213 15:18:37.655733 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:37.796256 containerd[1447]: time="2025-02-13T15:18:37.796140110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:37.796256 containerd[1447]: time="2025-02-13T15:18:37.796216750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:37.796547 containerd[1447]: time="2025-02-13T15:18:37.796233062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.796547 containerd[1447]: time="2025-02-13T15:18:37.796311821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.798593 containerd[1447]: time="2025-02-13T15:18:37.798485775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:37.798715 containerd[1447]: time="2025-02-13T15:18:37.798570172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:37.798715 containerd[1447]: time="2025-02-13T15:18:37.798592160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.798715 containerd[1447]: time="2025-02-13T15:18:37.798667002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.799586 containerd[1447]: time="2025-02-13T15:18:37.799411736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:37.800206 containerd[1447]: time="2025-02-13T15:18:37.800169943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:37.800695 containerd[1447]: time="2025-02-13T15:18:37.800537033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.800695 containerd[1447]: time="2025-02-13T15:18:37.800632584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:37.809823 kubelet[2098]: W0213 15:18:37.809772 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 13 15:18:37.809823 kubelet[2098]: E0213 15:18:37.809825 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:37.818156 systemd[1]: Started cri-containerd-e4da9c12b05e13290b887c34f88e515fcdd9dcc463724d7bf3d4e7ebc6ac2b64.scope - libcontainer container e4da9c12b05e13290b887c34f88e515fcdd9dcc463724d7bf3d4e7ebc6ac2b64.
Feb 13 15:18:37.822337 systemd[1]: Started cri-containerd-bfe598e4a088914b94075fea1733d46dafa87f4f925d05e2ccf6e07c0e6b3427.scope - libcontainer container bfe598e4a088914b94075fea1733d46dafa87f4f925d05e2ccf6e07c0e6b3427.
Feb 13 15:18:37.823898 systemd[1]: Started cri-containerd-e0aaf24ebfc473fbdfec35af2a188215eb972a1a7e95c571a452b196a9acb90c.scope - libcontainer container e0aaf24ebfc473fbdfec35af2a188215eb972a1a7e95c571a452b196a9acb90c.
Feb 13 15:18:37.856961 containerd[1447]: time="2025-02-13T15:18:37.856871547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe598e4a088914b94075fea1733d46dafa87f4f925d05e2ccf6e07c0e6b3427\""
Feb 13 15:18:37.859739 containerd[1447]: time="2025-02-13T15:18:37.859707599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4da9c12b05e13290b887c34f88e515fcdd9dcc463724d7bf3d4e7ebc6ac2b64\""
Feb 13 15:18:37.860340 kubelet[2098]: E0213 15:18:37.860173 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.860919 kubelet[2098]: E0213 15:18:37.860896 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.865172 containerd[1447]: time="2025-02-13T15:18:37.865144704Z" level=info msg="CreateContainer within sandbox \"e4da9c12b05e13290b887c34f88e515fcdd9dcc463724d7bf3d4e7ebc6ac2b64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:18:37.865307 containerd[1447]: time="2025-02-13T15:18:37.865290109Z" level=info msg="CreateContainer within sandbox \"bfe598e4a088914b94075fea1733d46dafa87f4f925d05e2ccf6e07c0e6b3427\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:18:37.866391 containerd[1447]: time="2025-02-13T15:18:37.866243735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e2031a310ccbf6dcf7ae8e87f037387,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0aaf24ebfc473fbdfec35af2a188215eb972a1a7e95c571a452b196a9acb90c\""
Feb 13 15:18:37.867482 kubelet[2098]: E0213 15:18:37.867449 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:37.869663 containerd[1447]: time="2025-02-13T15:18:37.869629342Z" level=info msg="CreateContainer within sandbox \"e0aaf24ebfc473fbdfec35af2a188215eb972a1a7e95c571a452b196a9acb90c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:18:37.981807 kubelet[2098]: E0213 15:18:37.981753 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="1.6s"
Feb 13 15:18:38.006429 kubelet[2098]: W0213 15:18:38.006324 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 13 15:18:38.006429 kubelet[2098]: E0213 15:18:38.006392 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:38.168020 kubelet[2098]: W0213 15:18:38.167939 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Feb 13 15:18:38.168020 kubelet[2098]: E0213 15:18:38.168024 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:18:38.184440 kubelet[2098]: I0213 15:18:38.184408 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:18:38.184875 kubelet[2098]: E0213 15:18:38.184849 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Feb 13 15:18:38.305583 containerd[1447]: time="2025-02-13T15:18:38.305459005Z" level=info msg="CreateContainer within sandbox \"e4da9c12b05e13290b887c34f88e515fcdd9dcc463724d7bf3d4e7ebc6ac2b64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e4c5842aab3c83451a192034ba856ff7d62facad737b7d9ade2c0c922a377a3\""
Feb 13 15:18:38.306206 containerd[1447]: time="2025-02-13T15:18:38.306169409Z" level=info msg="StartContainer for \"6e4c5842aab3c83451a192034ba856ff7d62facad737b7d9ade2c0c922a377a3\""
Feb 13 15:18:38.323314 containerd[1447]: time="2025-02-13T15:18:38.323266394Z" level=info msg="CreateContainer within sandbox \"e0aaf24ebfc473fbdfec35af2a188215eb972a1a7e95c571a452b196a9acb90c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a953e2d62ca537f29c9b03f77442ac95cde8869277a2333509299ea920f69e63\""
Feb 13 15:18:38.324436 containerd[1447]: time="2025-02-13T15:18:38.324390270Z" level=info msg="StartContainer for \"a953e2d62ca537f29c9b03f77442ac95cde8869277a2333509299ea920f69e63\""
Feb 13 15:18:38.326225 containerd[1447]: time="2025-02-13T15:18:38.326160422Z" level=info msg="CreateContainer within sandbox \"bfe598e4a088914b94075fea1733d46dafa87f4f925d05e2ccf6e07c0e6b3427\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21205cfa5fa63a27d05fe0f772568b81d5721cf648ca4fa1b2452c06ba5764de\""
Feb 13 15:18:38.326717 containerd[1447]: time="2025-02-13T15:18:38.326693195Z" level=info msg="StartContainer for \"21205cfa5fa63a27d05fe0f772568b81d5721cf648ca4fa1b2452c06ba5764de\""
Feb 13 15:18:38.334158 systemd[1]: Started cri-containerd-6e4c5842aab3c83451a192034ba856ff7d62facad737b7d9ade2c0c922a377a3.scope - libcontainer container 6e4c5842aab3c83451a192034ba856ff7d62facad737b7d9ade2c0c922a377a3.
Feb 13 15:18:38.358134 systemd[1]: Started cri-containerd-21205cfa5fa63a27d05fe0f772568b81d5721cf648ca4fa1b2452c06ba5764de.scope - libcontainer container 21205cfa5fa63a27d05fe0f772568b81d5721cf648ca4fa1b2452c06ba5764de.
Feb 13 15:18:38.359337 systemd[1]: Started cri-containerd-a953e2d62ca537f29c9b03f77442ac95cde8869277a2333509299ea920f69e63.scope - libcontainer container a953e2d62ca537f29c9b03f77442ac95cde8869277a2333509299ea920f69e63.
Feb 13 15:18:38.400971 containerd[1447]: time="2025-02-13T15:18:38.400894460Z" level=info msg="StartContainer for \"6e4c5842aab3c83451a192034ba856ff7d62facad737b7d9ade2c0c922a377a3\" returns successfully"
Feb 13 15:18:38.401116 containerd[1447]: time="2025-02-13T15:18:38.401000407Z" level=info msg="StartContainer for \"a953e2d62ca537f29c9b03f77442ac95cde8869277a2333509299ea920f69e63\" returns successfully"
Feb 13 15:18:38.415007 containerd[1447]: time="2025-02-13T15:18:38.414965562Z" level=info msg="StartContainer for \"21205cfa5fa63a27d05fe0f772568b81d5721cf648ca4fa1b2452c06ba5764de\" returns successfully"
Feb 13 15:18:38.609463 kubelet[2098]: E0213 15:18:38.609327 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:38.611293 kubelet[2098]: E0213 15:18:38.611266 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:38.613558 kubelet[2098]: E0213 15:18:38.613509 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:39.616008 kubelet[2098]: E0213 15:18:39.615975 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:39.788153 kubelet[2098]: I0213 15:18:39.788115 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:18:40.546087 kubelet[2098]: E0213 15:18:40.546025 2098 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:18:40.625806 kubelet[2098]: I0213 15:18:40.625755 2098 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 15:18:40.625806 kubelet[2098]: E0213 15:18:40.625793 2098 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 15:18:40.635812 kubelet[2098]: E0213 15:18:40.635770 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:40.736363 kubelet[2098]: E0213 15:18:40.736320 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:40.836897 kubelet[2098]: E0213 15:18:40.836772 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:40.937319 kubelet[2098]: E0213 15:18:40.937272 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:41.037798 kubelet[2098]: E0213 15:18:41.037752 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:41.138845 kubelet[2098]: E0213 15:18:41.138723 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:41.239340 kubelet[2098]: E0213 15:18:41.239293 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:41.565858 kubelet[2098]: I0213 15:18:41.565782 2098 apiserver.go:52] "Watching apiserver"
Feb 13 15:18:41.580500 kubelet[2098]: I0213 15:18:41.578691 2098 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:18:42.589662 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-5.scope)...
Feb 13 15:18:42.589678 systemd[1]: Reloading...
Feb 13 15:18:42.649004 zram_generator::config[2416]: No configuration found.
Feb 13 15:18:42.728837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:42.788353 kubelet[2098]: E0213 15:18:42.788303 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:42.792847 systemd[1]: Reloading finished in 202 ms.
Feb 13 15:18:42.822371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:42.838038 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:18:42.838298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:42.848394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:42.935077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:42.938611 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:18:42.970123 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:42.970123 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:18:42.970123 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:42.970425 kubelet[2457]: I0213 15:18:42.970169 2457 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:18:42.978989 kubelet[2457]: I0213 15:18:42.978018 2457 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:18:42.978989 kubelet[2457]: I0213 15:18:42.978045 2457 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:18:42.978989 kubelet[2457]: I0213 15:18:42.978247 2457 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:18:42.979649 kubelet[2457]: I0213 15:18:42.979631 2457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:18:42.981605 kubelet[2457]: I0213 15:18:42.981525 2457 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:18:42.985688 kubelet[2457]: E0213 15:18:42.985650 2457 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:18:42.985688 kubelet[2457]: I0213 15:18:42.985684 2457 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:18:42.988168 kubelet[2457]: I0213 15:18:42.988110 2457 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:18:42.988263 kubelet[2457]: I0213 15:18:42.988242 2457 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:18:42.988394 kubelet[2457]: I0213 15:18:42.988364 2457 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:18:42.988554 kubelet[2457]: I0213 15:18:42.988394 2457 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:18:42.988638 kubelet[2457]: I0213 15:18:42.988566 2457 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:18:42.988638 kubelet[2457]: I0213 15:18:42.988577 2457 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:18:42.988638 kubelet[2457]: I0213 15:18:42.988617 2457 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:42.988786 kubelet[2457]: I0213 15:18:42.988723 2457 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:18:42.988786 kubelet[2457]: I0213 15:18:42.988738 2457 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:18:42.988786 kubelet[2457]: I0213 15:18:42.988763 2457 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:18:42.988786 kubelet[2457]: I0213 15:18:42.988772 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:18:42.989569 kubelet[2457]: I0213 15:18:42.989543 2457 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:18:42.990138 kubelet[2457]: I0213 15:18:42.990120 2457 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:18:42.991745 kubelet[2457]: I0213 15:18:42.990686 2457 server.go:1269] "Started kubelet"
Feb 13 15:18:42.991745 kubelet[2457]: I0213 15:18:42.991321 2457 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:18:42.992318 kubelet[2457]: I0213 15:18:42.992267 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:18:42.992624 kubelet[2457]: I0213 15:18:42.992600 2457 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:18:42.993992 kubelet[2457]: I0213 15:18:42.993967 2457 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:18:42.996584 kubelet[2457]: I0213 15:18:42.996562 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:18:42.997453 kubelet[2457]: I0213 15:18:42.997426 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:18:43.003128 kubelet[2457]: I0213 15:18:43.001246 2457 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:18:43.003128 kubelet[2457]: I0213 15:18:43.001372 2457 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:18:43.003128 kubelet[2457]: I0213 15:18:43.001502 2457 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:18:43.003128 kubelet[2457]: E0213 15:18:43.001862 2457 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:18:43.008990 kubelet[2457]: I0213 15:18:43.007733 2457 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:18:43.008990 kubelet[2457]: I0213 15:18:43.007838 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:18:43.014367 kubelet[2457]: E0213 15:18:43.014329 2457 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:18:43.015033 kubelet[2457]: I0213 15:18:43.014999 2457 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:18:43.018050 kubelet[2457]: I0213 15:18:43.017906 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:18:43.020769 kubelet[2457]: I0213 15:18:43.020470 2457 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Feb 13 15:18:43.020769 kubelet[2457]: I0213 15:18:43.020494 2457 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:18:43.020769 kubelet[2457]: I0213 15:18:43.020514 2457 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:18:43.020769 kubelet[2457]: E0213 15:18:43.020554 2457 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:18:43.048973 kubelet[2457]: I0213 15:18:43.048936 2457 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:18:43.048973 kubelet[2457]: I0213 15:18:43.048966 2457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:18:43.048973 kubelet[2457]: I0213 15:18:43.048985 2457 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:43.049201 kubelet[2457]: I0213 15:18:43.049137 2457 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:18:43.049201 kubelet[2457]: I0213 15:18:43.049149 2457 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:18:43.049201 kubelet[2457]: I0213 15:18:43.049174 2457 policy_none.go:49] "None policy: Start" Feb 13 15:18:43.049827 kubelet[2457]: I0213 15:18:43.049810 2457 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:18:43.049866 kubelet[2457]: I0213 15:18:43.049834 2457 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:18:43.050138 kubelet[2457]: I0213 15:18:43.050108 2457 state_mem.go:75] "Updated machine memory state" Feb 13 15:18:43.054456 kubelet[2457]: I0213 15:18:43.054421 2457 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:18:43.054629 kubelet[2457]: I0213 15:18:43.054596 2457 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:18:43.054668 kubelet[2457]: I0213 15:18:43.054614 2457 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:18:43.055022 kubelet[2457]: I0213 15:18:43.054784 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:18:43.127040 kubelet[2457]: E0213 15:18:43.126918 2457 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:18:43.158269 kubelet[2457]: I0213 15:18:43.158242 2457 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:18:43.165809 kubelet[2457]: I0213 15:18:43.165773 2457 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 15:18:43.165916 kubelet[2457]: I0213 15:18:43.165864 2457 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:18:43.202556 kubelet[2457]: I0213 15:18:43.202518 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:43.202556 kubelet[2457]: I0213 15:18:43.202554 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:43.202712 kubelet[2457]: I0213 15:18:43.202576 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " 
pod="kube-system/kube-scheduler-localhost" Feb 13 15:18:43.202712 kubelet[2457]: I0213 15:18:43.202592 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:43.202712 kubelet[2457]: I0213 15:18:43.202608 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e2031a310ccbf6dcf7ae8e87f037387-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e2031a310ccbf6dcf7ae8e87f037387\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:43.202712 kubelet[2457]: I0213 15:18:43.202622 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:43.202712 kubelet[2457]: I0213 15:18:43.202640 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:43.202839 kubelet[2457]: I0213 15:18:43.202677 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:43.202839 kubelet[2457]: I0213 15:18:43.202694 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:43.426988 kubelet[2457]: E0213 15:18:43.426764 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:43.426988 kubelet[2457]: E0213 15:18:43.426824 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:43.427478 kubelet[2457]: E0213 15:18:43.427150 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:43.989593 kubelet[2457]: I0213 15:18:43.989545 2457 apiserver.go:52] "Watching apiserver" Feb 13 15:18:44.001491 kubelet[2457]: I0213 15:18:44.001448 2457 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:18:44.035468 kubelet[2457]: E0213 15:18:44.034678 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:44.035468 kubelet[2457]: E0213 15:18:44.035193 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:44.041712 kubelet[2457]: E0213 15:18:44.041664 2457 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:44.041841 kubelet[2457]: E0213 15:18:44.041821 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:44.069593 kubelet[2457]: I0213 15:18:44.068785 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.068750782 podStartE2EDuration="1.068750782s" podCreationTimestamp="2025-02-13 15:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:44.068092015 +0000 UTC m=+1.126253960" watchObservedRunningTime="2025-02-13 15:18:44.068750782 +0000 UTC m=+1.126912727" Feb 13 15:18:44.069593 kubelet[2457]: I0213 15:18:44.069025 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.069017591 podStartE2EDuration="2.069017591s" podCreationTimestamp="2025-02-13 15:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:44.057280696 +0000 UTC m=+1.115442641" watchObservedRunningTime="2025-02-13 15:18:44.069017591 +0000 UTC m=+1.127179536" Feb 13 15:18:44.077655 kubelet[2457]: I0213 15:18:44.077586 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.077570606 podStartE2EDuration="1.077570606s" podCreationTimestamp="2025-02-13 15:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:44.077530982 +0000 UTC m=+1.135692927" 
watchObservedRunningTime="2025-02-13 15:18:44.077570606 +0000 UTC m=+1.135732551" Feb 13 15:18:44.449020 sudo[1578]: pam_unix(sudo:session): session closed for user root Feb 13 15:18:44.450454 sshd[1577]: Connection closed by 10.0.0.1 port 34676 Feb 13 15:18:44.450875 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:44.454584 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:34676.service: Deactivated successfully. Feb 13 15:18:44.456565 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:18:44.457322 systemd[1]: session-5.scope: Consumed 5.265s CPU time, 157.6M memory peak, 0B memory swap peak. Feb 13 15:18:44.457893 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:18:44.458961 systemd-logind[1422]: Removed session 5. Feb 13 15:18:45.036152 kubelet[2457]: E0213 15:18:45.036111 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:46.037345 kubelet[2457]: E0213 15:18:46.037308 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:48.669930 kubelet[2457]: E0213 15:18:48.669267 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:49.042615 kubelet[2457]: E0213 15:18:49.042298 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:50.020054 kubelet[2457]: I0213 15:18:50.019938 2457 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:18:50.020415 containerd[1447]: time="2025-02-13T15:18:50.020375096Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:18:50.020623 kubelet[2457]: I0213 15:18:50.020549 2457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:18:50.978645 systemd[1]: Created slice kubepods-besteffort-pod1d123518_daee_4a30_88eb_b655017c65fa.slice - libcontainer container kubepods-besteffort-pod1d123518_daee_4a30_88eb_b655017c65fa.slice. Feb 13 15:18:50.996741 systemd[1]: Created slice kubepods-burstable-pod61d5431d_579e_4c05_a2bd_0b0170963616.slice - libcontainer container kubepods-burstable-pod61d5431d_579e_4c05_a2bd_0b0170963616.slice. Feb 13 15:18:51.148363 kubelet[2457]: I0213 15:18:51.148311 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d123518-daee-4a30-88eb-b655017c65fa-lib-modules\") pod \"kube-proxy-lzdzh\" (UID: \"1d123518-daee-4a30-88eb-b655017c65fa\") " pod="kube-system/kube-proxy-lzdzh" Feb 13 15:18:51.148363 kubelet[2457]: I0213 15:18:51.148354 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/61d5431d-579e-4c05-a2bd-0b0170963616-flannel-cfg\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.148727 kubelet[2457]: I0213 15:18:51.148375 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61d5431d-579e-4c05-a2bd-0b0170963616-xtables-lock\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.148727 kubelet[2457]: I0213 15:18:51.148393 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d123518-daee-4a30-88eb-b655017c65fa-xtables-lock\") pod \"kube-proxy-lzdzh\" (UID: \"1d123518-daee-4a30-88eb-b655017c65fa\") " pod="kube-system/kube-proxy-lzdzh" Feb 13 15:18:51.148727 kubelet[2457]: I0213 15:18:51.148417 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/61d5431d-579e-4c05-a2bd-0b0170963616-cni\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.148727 kubelet[2457]: I0213 15:18:51.148434 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdcl\" (UniqueName: \"kubernetes.io/projected/61d5431d-579e-4c05-a2bd-0b0170963616-kube-api-access-bpdcl\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.148727 kubelet[2457]: I0213 15:18:51.148452 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/61d5431d-579e-4c05-a2bd-0b0170963616-cni-plugin\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.148845 kubelet[2457]: I0213 15:18:51.148466 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d123518-daee-4a30-88eb-b655017c65fa-kube-proxy\") pod \"kube-proxy-lzdzh\" (UID: \"1d123518-daee-4a30-88eb-b655017c65fa\") " pod="kube-system/kube-proxy-lzdzh" Feb 13 15:18:51.148845 kubelet[2457]: I0213 15:18:51.148481 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgk94\" (UniqueName: 
\"kubernetes.io/projected/1d123518-daee-4a30-88eb-b655017c65fa-kube-api-access-cgk94\") pod \"kube-proxy-lzdzh\" (UID: \"1d123518-daee-4a30-88eb-b655017c65fa\") " pod="kube-system/kube-proxy-lzdzh" Feb 13 15:18:51.148845 kubelet[2457]: I0213 15:18:51.148495 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61d5431d-579e-4c05-a2bd-0b0170963616-run\") pod \"kube-flannel-ds-npdzh\" (UID: \"61d5431d-579e-4c05-a2bd-0b0170963616\") " pod="kube-flannel/kube-flannel-ds-npdzh" Feb 13 15:18:51.293620 kubelet[2457]: E0213 15:18:51.293494 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:51.294865 containerd[1447]: time="2025-02-13T15:18:51.294538199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzdzh,Uid:1d123518-daee-4a30-88eb-b655017c65fa,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:51.303550 kubelet[2457]: E0213 15:18:51.303519 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:51.304325 containerd[1447]: time="2025-02-13T15:18:51.304289402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-npdzh,Uid:61d5431d-579e-4c05-a2bd-0b0170963616,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:18:51.330485 containerd[1447]: time="2025-02-13T15:18:51.329542501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:51.330485 containerd[1447]: time="2025-02-13T15:18:51.329589925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:51.330485 containerd[1447]: time="2025-02-13T15:18:51.329601402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.330485 containerd[1447]: time="2025-02-13T15:18:51.329679656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.338135 containerd[1447]: time="2025-02-13T15:18:51.338054636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:51.338135 containerd[1447]: time="2025-02-13T15:18:51.338117975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:51.338307 containerd[1447]: time="2025-02-13T15:18:51.338134809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.338597 containerd[1447]: time="2025-02-13T15:18:51.338436709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:51.349125 systemd[1]: Started cri-containerd-5ca21e2e3046ed85112566d198c9d881c08807b7f25af4938ea98bd1a00ff676.scope - libcontainer container 5ca21e2e3046ed85112566d198c9d881c08807b7f25af4938ea98bd1a00ff676. Feb 13 15:18:51.354332 systemd[1]: Started cri-containerd-747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7.scope - libcontainer container 747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7. 
Feb 13 15:18:51.370719 containerd[1447]: time="2025-02-13T15:18:51.370675450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzdzh,Uid:1d123518-daee-4a30-88eb-b655017c65fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ca21e2e3046ed85112566d198c9d881c08807b7f25af4938ea98bd1a00ff676\"" Feb 13 15:18:51.371381 kubelet[2457]: E0213 15:18:51.371331 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:51.375836 containerd[1447]: time="2025-02-13T15:18:51.375198788Z" level=info msg="CreateContainer within sandbox \"5ca21e2e3046ed85112566d198c9d881c08807b7f25af4938ea98bd1a00ff676\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:18:51.388341 containerd[1447]: time="2025-02-13T15:18:51.388301640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-npdzh,Uid:61d5431d-579e-4c05-a2bd-0b0170963616,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\"" Feb 13 15:18:51.389119 kubelet[2457]: E0213 15:18:51.389097 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:51.390267 containerd[1447]: time="2025-02-13T15:18:51.390240076Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:18:51.390974 containerd[1447]: time="2025-02-13T15:18:51.390932087Z" level=info msg="CreateContainer within sandbox \"5ca21e2e3046ed85112566d198c9d881c08807b7f25af4938ea98bd1a00ff676\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51e85b86d023458f1704671e8c6fe1c5de3396f97822b4902ee2e7bf6e616621\"" Feb 13 15:18:51.392027 containerd[1447]: time="2025-02-13T15:18:51.391555240Z" level=info msg="StartContainer for 
\"51e85b86d023458f1704671e8c6fe1c5de3396f97822b4902ee2e7bf6e616621\"" Feb 13 15:18:51.422141 systemd[1]: Started cri-containerd-51e85b86d023458f1704671e8c6fe1c5de3396f97822b4902ee2e7bf6e616621.scope - libcontainer container 51e85b86d023458f1704671e8c6fe1c5de3396f97822b4902ee2e7bf6e616621. Feb 13 15:18:51.450548 containerd[1447]: time="2025-02-13T15:18:51.450442336Z" level=info msg="StartContainer for \"51e85b86d023458f1704671e8c6fe1c5de3396f97822b4902ee2e7bf6e616621\" returns successfully" Feb 13 15:18:52.053259 kubelet[2457]: E0213 15:18:52.053227 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:52.062458 kubelet[2457]: I0213 15:18:52.062384 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzdzh" podStartSLOduration=2.062369404 podStartE2EDuration="2.062369404s" podCreationTimestamp="2025-02-13 15:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:52.062118684 +0000 UTC m=+9.120280589" watchObservedRunningTime="2025-02-13 15:18:52.062369404 +0000 UTC m=+9.120531349" Feb 13 15:18:52.587939 kubelet[2457]: E0213 15:18:52.587680 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:52.704977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141380050.mount: Deactivated successfully. 
Feb 13 15:18:52.739185 containerd[1447]: time="2025-02-13T15:18:52.739137575Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:52.740020 containerd[1447]: time="2025-02-13T15:18:52.739734903Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:18:52.741193 containerd[1447]: time="2025-02-13T15:18:52.740752256Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:52.743109 containerd[1447]: time="2025-02-13T15:18:52.743073630Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:52.744057 containerd[1447]: time="2025-02-13T15:18:52.744025724Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.353750179s" Feb 13 15:18:52.744160 containerd[1447]: time="2025-02-13T15:18:52.744143526Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:18:52.746747 containerd[1447]: time="2025-02-13T15:18:52.746705102Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:18:52.757221 containerd[1447]: 
time="2025-02-13T15:18:52.757154663Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d\"" Feb 13 15:18:52.757677 containerd[1447]: time="2025-02-13T15:18:52.757647944Z" level=info msg="StartContainer for \"c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d\"" Feb 13 15:18:52.782439 systemd[1]: Started cri-containerd-c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d.scope - libcontainer container c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d. Feb 13 15:18:52.810300 containerd[1447]: time="2025-02-13T15:18:52.810245313Z" level=info msg="StartContainer for \"c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d\" returns successfully" Feb 13 15:18:52.816481 systemd[1]: cri-containerd-c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d.scope: Deactivated successfully. 
Feb 13 15:18:52.856543 containerd[1447]: time="2025-02-13T15:18:52.856370244Z" level=info msg="shim disconnected" id=c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d namespace=k8s.io Feb 13 15:18:52.856543 containerd[1447]: time="2025-02-13T15:18:52.856465933Z" level=warning msg="cleaning up after shim disconnected" id=c658d264a62047d8af8b34b49a1f8b4d5758dc4167218a20d0b691a8a303817d namespace=k8s.io Feb 13 15:18:52.856543 containerd[1447]: time="2025-02-13T15:18:52.856475410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:53.059038 kubelet[2457]: E0213 15:18:53.059000 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:53.060087 kubelet[2457]: E0213 15:18:53.060048 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:53.061713 containerd[1447]: time="2025-02-13T15:18:53.061659435Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:18:54.544534 kubelet[2457]: E0213 15:18:54.544416 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:54.556795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727834025.mount: Deactivated successfully. 
Feb 13 15:18:55.038439 containerd[1447]: time="2025-02-13T15:18:55.038395799Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:55.038991 containerd[1447]: time="2025-02-13T15:18:55.038934282Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Feb 13 15:18:55.039836 containerd[1447]: time="2025-02-13T15:18:55.039810466Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:55.043917 containerd[1447]: time="2025-02-13T15:18:55.043875918Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:55.045146 containerd[1447]: time="2025-02-13T15:18:55.044993191Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.98329049s"
Feb 13 15:18:55.045146 containerd[1447]: time="2025-02-13T15:18:55.045037178Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 15:18:55.047046 containerd[1447]: time="2025-02-13T15:18:55.047018319Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:18:55.059707 containerd[1447]: time="2025-02-13T15:18:55.059667782Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96\""
Feb 13 15:18:55.060797 containerd[1447]: time="2025-02-13T15:18:55.060760342Z" level=info msg="StartContainer for \"894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96\""
Feb 13 15:18:55.092163 systemd[1]: Started cri-containerd-894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96.scope - libcontainer container 894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96.
Feb 13 15:18:55.119015 containerd[1447]: time="2025-02-13T15:18:55.118975446Z" level=info msg="StartContainer for \"894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96\" returns successfully"
Feb 13 15:18:55.119540 systemd[1]: cri-containerd-894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96.scope: Deactivated successfully.
Feb 13 15:18:55.159150 kubelet[2457]: I0213 15:18:55.159113 2457 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:18:55.196541 systemd[1]: Created slice kubepods-burstable-pod0cda0c53_f37a_4261_a4f2_1587ed7b3f2d.slice - libcontainer container kubepods-burstable-pod0cda0c53_f37a_4261_a4f2_1587ed7b3f2d.slice.
Feb 13 15:18:55.201320 systemd[1]: Created slice kubepods-burstable-pod0317a8e9_d27a_48d6_a422_43c6840cf8db.slice - libcontainer container kubepods-burstable-pod0317a8e9_d27a_48d6_a422_43c6840cf8db.slice.
Feb 13 15:18:55.220882 containerd[1447]: time="2025-02-13T15:18:55.220792646Z" level=info msg="shim disconnected" id=894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96 namespace=k8s.io
Feb 13 15:18:55.220882 containerd[1447]: time="2025-02-13T15:18:55.220851789Z" level=warning msg="cleaning up after shim disconnected" id=894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96 namespace=k8s.io
Feb 13 15:18:55.220882 containerd[1447]: time="2025-02-13T15:18:55.220860306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:55.283262 kubelet[2457]: I0213 15:18:55.283222 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0317a8e9-d27a-48d6-a422-43c6840cf8db-config-volume\") pod \"coredns-6f6b679f8f-24z9g\" (UID: \"0317a8e9-d27a-48d6-a422-43c6840cf8db\") " pod="kube-system/coredns-6f6b679f8f-24z9g"
Feb 13 15:18:55.283262 kubelet[2457]: I0213 15:18:55.283263 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cda0c53-f37a-4261-a4f2-1587ed7b3f2d-config-volume\") pod \"coredns-6f6b679f8f-w7fzd\" (UID: \"0cda0c53-f37a-4261-a4f2-1587ed7b3f2d\") " pod="kube-system/coredns-6f6b679f8f-w7fzd"
Feb 13 15:18:55.283429 kubelet[2457]: I0213 15:18:55.283284 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgqtg\" (UniqueName: \"kubernetes.io/projected/0317a8e9-d27a-48d6-a422-43c6840cf8db-kube-api-access-cgqtg\") pod \"coredns-6f6b679f8f-24z9g\" (UID: \"0317a8e9-d27a-48d6-a422-43c6840cf8db\") " pod="kube-system/coredns-6f6b679f8f-24z9g"
Feb 13 15:18:55.283429 kubelet[2457]: I0213 15:18:55.283304 2457 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6kx\" (UniqueName: \"kubernetes.io/projected/0cda0c53-f37a-4261-a4f2-1587ed7b3f2d-kube-api-access-8s6kx\") pod \"coredns-6f6b679f8f-w7fzd\" (UID: \"0cda0c53-f37a-4261-a4f2-1587ed7b3f2d\") " pod="kube-system/coredns-6f6b679f8f-w7fzd"
Feb 13 15:18:55.420121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-894c05d7b215db10fb7a0e800d0ba09c578f05dc18175f30cd0cfa9a624d3a96-rootfs.mount: Deactivated successfully.
Feb 13 15:18:55.499788 kubelet[2457]: E0213 15:18:55.499750 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:55.500859 containerd[1447]: time="2025-02-13T15:18:55.500471698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w7fzd,Uid:0cda0c53-f37a-4261-a4f2-1587ed7b3f2d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:55.503427 kubelet[2457]: E0213 15:18:55.503391 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:55.504154 containerd[1447]: time="2025-02-13T15:18:55.503866546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-24z9g,Uid:0317a8e9-d27a-48d6-a422-43c6840cf8db,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:55.562545 containerd[1447]: time="2025-02-13T15:18:55.562482693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w7fzd,Uid:0cda0c53-f37a-4261-a4f2-1587ed7b3f2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:55.562769 kubelet[2457]: E0213 15:18:55.562727 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:55.563048 kubelet[2457]: E0213 15:18:55.562804 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-w7fzd"
Feb 13 15:18:55.563048 kubelet[2457]: E0213 15:18:55.562826 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-w7fzd"
Feb 13 15:18:55.563048 kubelet[2457]: E0213 15:18:55.562862 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-w7fzd_kube-system(0cda0c53-f37a-4261-a4f2-1587ed7b3f2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-w7fzd_kube-system(0cda0c53-f37a-4261-a4f2-1587ed7b3f2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-w7fzd" podUID="0cda0c53-f37a-4261-a4f2-1587ed7b3f2d"
Feb 13 15:18:55.576732 containerd[1447]: time="2025-02-13T15:18:55.576671185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-24z9g,Uid:0317a8e9-d27a-48d6-a422-43c6840cf8db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:55.576931 kubelet[2457]: E0213 15:18:55.576881 2457 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:55.576994 kubelet[2457]: E0213 15:18:55.576969 2457 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-24z9g"
Feb 13 15:18:55.576994 kubelet[2457]: E0213 15:18:55.576988 2457 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-24z9g"
Feb 13 15:18:55.577052 kubelet[2457]: E0213 15:18:55.577027 2457 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-24z9g_kube-system(0317a8e9-d27a-48d6-a422-43c6840cf8db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-24z9g_kube-system(0317a8e9-d27a-48d6-a422-43c6840cf8db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-24z9g" podUID="0317a8e9-d27a-48d6-a422-43c6840cf8db"
Feb 13 15:18:56.066747 kubelet[2457]: E0213 15:18:56.066703 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:56.069601 containerd[1447]: time="2025-02-13T15:18:56.069197964Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:18:56.081556 containerd[1447]: time="2025-02-13T15:18:56.081494122Z" level=info msg="CreateContainer within sandbox \"747497f64ce7c11b5aa6fd862a7d4dc755e6b23d13dc83aa1f8ea89a4c73deb7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"855ea8b660a500db6d41a6fd760ff43bd22f9477e092267b5970d84b3f8e2339\""
Feb 13 15:18:56.081972 containerd[1447]: time="2025-02-13T15:18:56.081933278Z" level=info msg="StartContainer for \"855ea8b660a500db6d41a6fd760ff43bd22f9477e092267b5970d84b3f8e2339\""
Feb 13 15:18:56.109047 systemd[1]: Started cri-containerd-855ea8b660a500db6d41a6fd760ff43bd22f9477e092267b5970d84b3f8e2339.scope - libcontainer container 855ea8b660a500db6d41a6fd760ff43bd22f9477e092267b5970d84b3f8e2339.
Feb 13 15:18:56.138630 containerd[1447]: time="2025-02-13T15:18:56.138572760Z" level=info msg="StartContainer for \"855ea8b660a500db6d41a6fd760ff43bd22f9477e092267b5970d84b3f8e2339\" returns successfully"
Feb 13 15:18:56.421165 systemd[1]: run-netns-cni\x2daaa457bc\x2d8525\x2d122e\x2d90e3\x2df5a25f88dd5a.mount: Deactivated successfully.
Feb 13 15:18:56.421245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9a28fe1a58bf2b84e4c95562ec6a11fd28540da64950704769ec4f2de24c9d2-shm.mount: Deactivated successfully.
Feb 13 15:18:56.421296 systemd[1]: run-netns-cni\x2d6d6513d7\x2d265d\x2d08e5\x2d175c\x2db20c25d61312.mount: Deactivated successfully.
Feb 13 15:18:56.421340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96501415469d9a66ecfdee9a710943c62318d4a81d4c8ab4e0416dab7b1be7d3-shm.mount: Deactivated successfully.
Feb 13 15:18:57.071128 kubelet[2457]: E0213 15:18:57.071085 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:57.082305 kubelet[2457]: I0213 15:18:57.082244 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-npdzh" podStartSLOduration=3.42611825 podStartE2EDuration="7.08222899s" podCreationTimestamp="2025-02-13 15:18:50 +0000 UTC" firstStartedPulling="2025-02-13 15:18:51.389801902 +0000 UTC m=+8.447963847" lastFinishedPulling="2025-02-13 15:18:55.045912642 +0000 UTC m=+12.104074587" observedRunningTime="2025-02-13 15:18:57.082093867 +0000 UTC m=+14.140255932" watchObservedRunningTime="2025-02-13 15:18:57.08222899 +0000 UTC m=+14.140390935"
Feb 13 15:18:57.233402 systemd-networkd[1382]: flannel.1: Link UP
Feb 13 15:18:57.233409 systemd-networkd[1382]: flannel.1: Gained carrier
Feb 13 15:18:58.072796 kubelet[2457]: E0213 15:18:58.072761 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:58.799291 systemd-networkd[1382]: flannel.1: Gained IPv6LL
Feb 13 15:18:59.123847 update_engine[1425]: I20250213 15:18:59.123697 1425 update_attempter.cc:509] Updating boot flags...
Feb 13 15:18:59.147988 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3108)
Feb 13 15:19:07.022006 kubelet[2457]: E0213 15:19:07.021881 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:07.022764 containerd[1447]: time="2025-02-13T15:19:07.022714074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w7fzd,Uid:0cda0c53-f37a-4261-a4f2-1587ed7b3f2d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:19:07.077895 systemd-networkd[1382]: cni0: Link UP
Feb 13 15:19:07.077901 systemd-networkd[1382]: cni0: Gained carrier
Feb 13 15:19:07.080774 systemd-networkd[1382]: cni0: Lost carrier
Feb 13 15:19:07.087556 systemd-networkd[1382]: veth58a12397: Link UP
Feb 13 15:19:07.090266 kernel: cni0: port 1(veth58a12397) entered blocking state
Feb 13 15:19:07.090339 kernel: cni0: port 1(veth58a12397) entered disabled state
Feb 13 15:19:07.092314 kernel: veth58a12397: entered allmulticast mode
Feb 13 15:19:07.096744 kernel: veth58a12397: entered promiscuous mode
Feb 13 15:19:07.096849 kernel: cni0: port 1(veth58a12397) entered blocking state
Feb 13 15:19:07.096866 kernel: cni0: port 1(veth58a12397) entered forwarding state
Feb 13 15:19:07.104975 kernel: cni0: port 1(veth58a12397) entered disabled state
Feb 13 15:19:07.115317 kernel: cni0: port 1(veth58a12397) entered blocking state
Feb 13 15:19:07.115425 kernel: cni0: port 1(veth58a12397) entered forwarding state
Feb 13 15:19:07.115613 systemd-networkd[1382]: veth58a12397: Gained carrier
Feb 13 15:19:07.115856 systemd-networkd[1382]: cni0: Gained carrier
Feb 13 15:19:07.117596 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:19:07.117596 containerd[1447]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:19:07.139354 containerd[1447]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:19:07.139132588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:07.139354 containerd[1447]: time="2025-02-13T15:19:07.139235088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:07.139354 containerd[1447]: time="2025-02-13T15:19:07.139252004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:07.139539 containerd[1447]: time="2025-02-13T15:19:07.139407773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:07.166666 systemd[1]: Started cri-containerd-00280637ddbd6809305231e1b73556c04e2c749c9771fa97894251f6ccf51640.scope - libcontainer container 00280637ddbd6809305231e1b73556c04e2c749c9771fa97894251f6ccf51640.
Feb 13 15:19:07.178567 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:19:07.197934 containerd[1447]: time="2025-02-13T15:19:07.197894855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w7fzd,Uid:0cda0c53-f37a-4261-a4f2-1587ed7b3f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"00280637ddbd6809305231e1b73556c04e2c749c9771fa97894251f6ccf51640\""
Feb 13 15:19:07.199150 kubelet[2457]: E0213 15:19:07.198998 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:07.204793 containerd[1447]: time="2025-02-13T15:19:07.204743008Z" level=info msg="CreateContainer within sandbox \"00280637ddbd6809305231e1b73556c04e2c749c9771fa97894251f6ccf51640\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:19:07.224325 containerd[1447]: time="2025-02-13T15:19:07.224239755Z" level=info msg="CreateContainer within sandbox \"00280637ddbd6809305231e1b73556c04e2c749c9771fa97894251f6ccf51640\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fa588399219fa7691b18ea20e1eb27f13640e3bce89c48a14e88a5688ba22de\""
Feb 13 15:19:07.225078 containerd[1447]: time="2025-02-13T15:19:07.224902182Z" level=info msg="StartContainer for \"9fa588399219fa7691b18ea20e1eb27f13640e3bce89c48a14e88a5688ba22de\""
Feb 13 15:19:07.262191 systemd[1]: Started cri-containerd-9fa588399219fa7691b18ea20e1eb27f13640e3bce89c48a14e88a5688ba22de.scope - libcontainer container 9fa588399219fa7691b18ea20e1eb27f13640e3bce89c48a14e88a5688ba22de.
Feb 13 15:19:07.296877 containerd[1447]: time="2025-02-13T15:19:07.296736199Z" level=info msg="StartContainer for \"9fa588399219fa7691b18ea20e1eb27f13640e3bce89c48a14e88a5688ba22de\" returns successfully"
Feb 13 15:19:08.021349 kubelet[2457]: E0213 15:19:08.021319 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:08.022387 containerd[1447]: time="2025-02-13T15:19:08.022346499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-24z9g,Uid:0317a8e9-d27a-48d6-a422-43c6840cf8db,Namespace:kube-system,Attempt:0,}"
Feb 13 15:19:08.061338 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:43388.service - OpenSSH per-connection server daemon (10.0.0.1:43388).
Feb 13 15:19:08.093440 kubelet[2457]: E0213 15:19:08.093394 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:08.113285 kubelet[2457]: I0213 15:19:08.113204 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-w7fzd" podStartSLOduration=17.113141537 podStartE2EDuration="17.113141537s" podCreationTimestamp="2025-02-13 15:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:08.111084135 +0000 UTC m=+25.169246080" watchObservedRunningTime="2025-02-13 15:19:08.113141537 +0000 UTC m=+25.171303482"
Feb 13 15:19:08.118173 systemd-networkd[1382]: veth2bfdeff6: Link UP
Feb 13 15:19:08.119868 kernel: cni0: port 2(veth2bfdeff6) entered blocking state
Feb 13 15:19:08.119974 kernel: cni0: port 2(veth2bfdeff6) entered disabled state
Feb 13 15:19:08.128040 kernel: veth2bfdeff6: entered allmulticast mode
Feb 13 15:19:08.129248 kernel: veth2bfdeff6: entered promiscuous mode
Feb 13 15:19:08.130129 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 43388 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:08.137775 sshd-session[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:08.152162 kernel: cni0: port 2(veth2bfdeff6) entered blocking state
Feb 13 15:19:08.152250 kernel: cni0: port 2(veth2bfdeff6) entered forwarding state
Feb 13 15:19:08.153525 systemd-networkd[1382]: veth2bfdeff6: Gained carrier
Feb 13 15:19:08.158942 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:19:08.158942 containerd[1447]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:19:08.160239 systemd-logind[1422]: New session 6 of user core.
Feb 13 15:19:08.162217 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:19:08.181859 containerd[1447]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:19:08.181354342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:08.181859 containerd[1447]: time="2025-02-13T15:19:08.181822732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:08.181859 containerd[1447]: time="2025-02-13T15:19:08.181844607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:08.182067 containerd[1447]: time="2025-02-13T15:19:08.181940349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:08.207156 systemd-networkd[1382]: cni0: Gained IPv6LL
Feb 13 15:19:08.207803 systemd-networkd[1382]: veth58a12397: Gained IPv6LL
Feb 13 15:19:08.208163 systemd[1]: Started cri-containerd-99994d4bf8611c57e6534189d3b34fb4a1e92e9fba73eb5d815f85f2ea21525b.scope - libcontainer container 99994d4bf8611c57e6534189d3b34fb4a1e92e9fba73eb5d815f85f2ea21525b.
Feb 13 15:19:08.226456 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:19:08.251247 containerd[1447]: time="2025-02-13T15:19:08.251111849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-24z9g,Uid:0317a8e9-d27a-48d6-a422-43c6840cf8db,Namespace:kube-system,Attempt:0,} returns sandbox id \"99994d4bf8611c57e6534189d3b34fb4a1e92e9fba73eb5d815f85f2ea21525b\""
Feb 13 15:19:08.252278 kubelet[2457]: E0213 15:19:08.252255 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:08.255072 containerd[1447]: time="2025-02-13T15:19:08.254865683Z" level=info msg="CreateContainer within sandbox \"99994d4bf8611c57e6534189d3b34fb4a1e92e9fba73eb5d815f85f2ea21525b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:19:08.325883 sshd[3310]: Connection closed by 10.0.0.1 port 43388
Feb 13 15:19:08.326270 sshd-session[3274]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:08.330417 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:43388.service: Deactivated successfully.
Feb 13 15:19:08.332295 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:19:08.332914 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:19:08.334423 systemd-logind[1422]: Removed session 6.
Feb 13 15:19:08.346900 containerd[1447]: time="2025-02-13T15:19:08.346845611Z" level=info msg="CreateContainer within sandbox \"99994d4bf8611c57e6534189d3b34fb4a1e92e9fba73eb5d815f85f2ea21525b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a2aef53ab6ea7dcaeedda54b242234646e4e01ef8ba66b4cb57dc443441e846\""
Feb 13 15:19:08.347572 containerd[1447]: time="2025-02-13T15:19:08.347449294Z" level=info msg="StartContainer for \"5a2aef53ab6ea7dcaeedda54b242234646e4e01ef8ba66b4cb57dc443441e846\""
Feb 13 15:19:08.380222 systemd[1]: Started cri-containerd-5a2aef53ab6ea7dcaeedda54b242234646e4e01ef8ba66b4cb57dc443441e846.scope - libcontainer container 5a2aef53ab6ea7dcaeedda54b242234646e4e01ef8ba66b4cb57dc443441e846.
Feb 13 15:19:08.414974 containerd[1447]: time="2025-02-13T15:19:08.414908805Z" level=info msg="StartContainer for \"5a2aef53ab6ea7dcaeedda54b242234646e4e01ef8ba66b4cb57dc443441e846\" returns successfully"
Feb 13 15:19:09.096753 kubelet[2457]: E0213 15:19:09.096720 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:09.097490 kubelet[2457]: E0213 15:19:09.096808 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:09.142814 kubelet[2457]: I0213 15:19:09.142748 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-24z9g" podStartSLOduration=18.142730289 podStartE2EDuration="18.142730289s" podCreationTimestamp="2025-02-13 15:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:09.120114167 +0000 UTC m=+26.178276112" watchObservedRunningTime="2025-02-13 15:19:09.142730289 +0000 UTC m=+26.200892234"
Feb 13 15:19:09.679084 systemd-networkd[1382]: veth2bfdeff6: Gained IPv6LL
Feb 13 15:19:10.098900 kubelet[2457]: E0213 15:19:10.098796 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:10.098900 kubelet[2457]: E0213 15:19:10.098862 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:11.100487 kubelet[2457]: E0213 15:19:11.100451 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:19:13.363336 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:53200.service - OpenSSH per-connection server daemon (10.0.0.1:53200).
Feb 13 15:19:13.408453 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 53200 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:13.409813 sshd-session[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:13.415555 systemd-logind[1422]: New session 7 of user core.
Feb 13 15:19:13.425185 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:19:13.562527 sshd[3431]: Connection closed by 10.0.0.1 port 53200
Feb 13 15:19:13.562906 sshd-session[3429]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:13.567169 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:53200.service: Deactivated successfully.
Feb 13 15:19:13.569266 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:19:13.570467 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:19:13.571543 systemd-logind[1422]: Removed session 7.
Feb 13 15:19:18.575740 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:53212.service - OpenSSH per-connection server daemon (10.0.0.1:53212).
Feb 13 15:19:18.622812 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:18.624228 sshd-session[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:18.629178 systemd-logind[1422]: New session 8 of user core.
Feb 13 15:19:18.636193 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:19:18.764859 sshd[3468]: Connection closed by 10.0.0.1 port 53212
Feb 13 15:19:18.766546 sshd-session[3466]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:18.779021 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:53212.service: Deactivated successfully.
Feb 13 15:19:18.782379 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:19:18.787852 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:19:18.796492 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226).
Feb 13 15:19:18.799627 systemd-logind[1422]: Removed session 8.
Feb 13 15:19:18.841899 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:18.843435 sshd-session[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:18.851276 systemd-logind[1422]: New session 9 of user core.
Feb 13 15:19:18.861184 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:19:19.036426 sshd[3483]: Connection closed by 10.0.0.1 port 53226
Feb 13 15:19:19.039223 sshd-session[3481]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:19.055450 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:53226.service: Deactivated successfully.
Feb 13 15:19:19.059935 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:19:19.061354 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:19:19.068700 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:53238.service - OpenSSH per-connection server daemon (10.0.0.1:53238).
Feb 13 15:19:19.071351 systemd-logind[1422]: Removed session 9.
Feb 13 15:19:19.120548 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 53238 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:19.122169 sshd-session[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:19.126412 systemd-logind[1422]: New session 10 of user core.
Feb 13 15:19:19.136180 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:19:19.257160 sshd[3495]: Connection closed by 10.0.0.1 port 53238
Feb 13 15:19:19.258088 sshd-session[3493]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:19.261531 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:53238.service: Deactivated successfully.
Feb 13 15:19:19.264335 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:19:19.264967 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:19:19.266221 systemd-logind[1422]: Removed session 10.
Feb 13 15:19:24.267767 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:44006.service - OpenSSH per-connection server daemon (10.0.0.1:44006).
Feb 13 15:19:24.309436 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 44006 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:24.310700 sshd-session[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:24.315016 systemd-logind[1422]: New session 11 of user core.
Feb 13 15:19:24.325131 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:19:24.438304 sshd[3533]: Connection closed by 10.0.0.1 port 44006
Feb 13 15:19:24.437818 sshd-session[3531]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:24.449375 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:44006.service: Deactivated successfully.
Feb 13 15:19:24.450830 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:19:24.452020 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:19:24.466205 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:44012.service - OpenSSH per-connection server daemon (10.0.0.1:44012).
Feb 13 15:19:24.468895 systemd-logind[1422]: Removed session 11.
Feb 13 15:19:24.504807 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 44012 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:24.505349 sshd-session[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:24.509875 systemd-logind[1422]: New session 12 of user core.
Feb 13 15:19:24.518108 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:19:24.752934 sshd[3547]: Connection closed by 10.0.0.1 port 44012
Feb 13 15:19:24.755731 sshd-session[3545]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:24.762715 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:44012.service: Deactivated successfully.
Feb 13 15:19:24.764387 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:19:24.765645 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:19:24.774213 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:44024.service - OpenSSH per-connection server daemon (10.0.0.1:44024).
Feb 13 15:19:24.775369 systemd-logind[1422]: Removed session 12.
Feb 13 15:19:24.812201 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 44024 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:24.812717 sshd-session[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:24.817012 systemd-logind[1422]: New session 13 of user core.
Feb 13 15:19:24.835118 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:19:26.083754 sshd[3559]: Connection closed by 10.0.0.1 port 44024
Feb 13 15:19:26.084276 sshd-session[3557]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:26.092596 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:44024.service: Deactivated successfully.
Feb 13 15:19:26.100126 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:19:26.102822 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:19:26.114265 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:44036.service - OpenSSH per-connection server daemon (10.0.0.1:44036).
Feb 13 15:19:26.118903 systemd-logind[1422]: Removed session 13.
Feb 13 15:19:26.174012 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 44036 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:26.174834 sshd-session[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:26.179188 systemd-logind[1422]: New session 14 of user core.
Feb 13 15:19:26.191156 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:19:26.421567 sshd[3579]: Connection closed by 10.0.0.1 port 44036
Feb 13 15:19:26.424118 sshd-session[3577]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:26.434674 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:44036.service: Deactivated successfully.
Feb 13 15:19:26.436620 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:19:26.441124 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:19:26.454446 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038).
Feb 13 15:19:26.456211 systemd-logind[1422]: Removed session 14.
Feb 13 15:19:26.490930 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:26.492144 sshd-session[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:26.495782 systemd-logind[1422]: New session 15 of user core.
Feb 13 15:19:26.503124 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:19:26.625192 sshd[3592]: Connection closed by 10.0.0.1 port 44038
Feb 13 15:19:26.625704 sshd-session[3590]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:26.628767 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:44038.service: Deactivated successfully.
Feb 13 15:19:26.630664 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:19:26.631671 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:19:26.632442 systemd-logind[1422]: Removed session 15.
Feb 13 15:19:31.635268 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:44040.service - OpenSSH per-connection server daemon (10.0.0.1:44040).
Feb 13 15:19:31.699292 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:31.700519 sshd-session[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:31.704909 systemd-logind[1422]: New session 16 of user core.
Feb 13 15:19:31.718176 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:19:31.840097 sshd[3630]: Connection closed by 10.0.0.1 port 44040
Feb 13 15:19:31.840683 sshd-session[3628]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:31.844197 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:44040.service: Deactivated successfully.
Feb 13 15:19:31.846144 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:19:31.846822 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:19:31.847633 systemd-logind[1422]: Removed session 16.
Feb 13 15:19:36.868682 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:45750.service - OpenSSH per-connection server daemon (10.0.0.1:45750).
Feb 13 15:19:36.914890 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 45750 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:36.916978 sshd-session[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:36.923605 systemd-logind[1422]: New session 17 of user core.
Feb 13 15:19:36.933171 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:19:37.046464 sshd[3665]: Connection closed by 10.0.0.1 port 45750
Feb 13 15:19:37.047024 sshd-session[3663]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:37.051185 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:45750.service: Deactivated successfully.
Feb 13 15:19:37.053988 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:19:37.054726 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:19:37.055621 systemd-logind[1422]: Removed session 17.
Feb 13 15:19:42.059695 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752).
Feb 13 15:19:42.105406 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:19:42.106744 sshd-session[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:42.112811 systemd-logind[1422]: New session 18 of user core.
Feb 13 15:19:42.125170 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:19:42.241988 sshd[3701]: Connection closed by 10.0.0.1 port 45752
Feb 13 15:19:42.243390 sshd-session[3699]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:42.245992 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:45752.service: Deactivated successfully.
Feb 13 15:19:42.247611 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:19:42.248895 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:19:42.250713 systemd-logind[1422]: Removed session 18.