Feb 13 15:10:12.029584 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:10:12.029606 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025 Feb 13 15:10:12.029616 kernel: KASLR enabled Feb 13 15:10:12.029621 kernel: efi: EFI v2.7 by EDK II Feb 13 15:10:12.029627 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 15:10:12.029633 kernel: random: crng init done Feb 13 15:10:12.029639 kernel: secureboot: Secure boot disabled Feb 13 15:10:12.029645 kernel: ACPI: Early table checksum verification disabled Feb 13 15:10:12.029651 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 15:10:12.029659 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:10:12.029665 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029672 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029678 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029684 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029692 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029700 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029707 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029713 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029720 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:10:12.029726 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 15:10:12.029733 kernel: NUMA: Failed to initialise from firmware Feb 13 15:10:12.029739 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:12.029746 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Feb 13 15:10:12.029759 kernel: Zone ranges: Feb 13 15:10:12.029766 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:12.029774 kernel: DMA32 empty Feb 13 15:10:12.029780 kernel: Normal empty Feb 13 15:10:12.029786 kernel: Movable zone start for each node Feb 13 15:10:12.029793 kernel: Early memory node ranges Feb 13 15:10:12.029799 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 15:10:12.029805 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 15:10:12.029811 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 15:10:12.029817 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 15:10:12.029824 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 15:10:12.029830 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 15:10:12.029836 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 15:10:12.029843 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 15:10:12.029850 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 15:10:12.029857 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:10:12.029863 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 15:10:12.029873 kernel: psci: 
probing for conduit method from ACPI. Feb 13 15:10:12.029880 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:10:12.029887 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:10:12.029895 kernel: psci: Trusted OS migration not required Feb 13 15:10:12.029902 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:10:12.029908 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:10:12.029915 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:10:12.029922 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:10:12.029929 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 15:10:12.029935 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:10:12.029942 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:10:12.029949 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:10:12.029955 kernel: CPU features: detected: Spectre-v4 Feb 13 15:10:12.029963 kernel: CPU features: detected: Spectre-BHB Feb 13 15:10:12.029970 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:10:12.029977 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:10:12.029983 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:10:12.029990 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:10:12.029997 kernel: alternatives: applying boot alternatives Feb 13 15:10:12.030005 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef Feb 13 15:10:12.030012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:10:12.030019 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:10:12.030025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:10:12.030032 kernel: Fallback order for Node 0: 0 Feb 13 15:10:12.030040 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 15:10:12.030047 kernel: Policy zone: DMA Feb 13 15:10:12.030054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:10:12.030061 kernel: software IO TLB: area num 4. Feb 13 15:10:12.030068 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 15:10:12.030076 kernel: Memory: 2387536K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184752K reserved, 0K cma-reserved) Feb 13 15:10:12.030083 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:10:12.030090 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:10:12.030104 kernel: rcu: RCU event tracing is enabled. Feb 13 15:10:12.030111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:10:12.030118 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:10:12.030125 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:10:12.030138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:10:12.030144 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:10:12.030151 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:10:12.030158 kernel: GICv3: 256 SPIs implemented Feb 13 15:10:12.030164 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:10:12.030170 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:10:12.030177 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:10:12.030183 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:10:12.030199 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:10:12.030206 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:10:12.030214 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:10:12.030224 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 15:10:12.030264 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 15:10:12.030308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:10:12.030317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:12.030323 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:10:12.030330 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:10:12.030337 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:10:12.030344 kernel: arm-pv: using stolen time PV Feb 13 15:10:12.030352 kernel: Console: colour dummy device 80x25 Feb 13 15:10:12.030358 kernel: ACPI: Core revision 20230628 Feb 13 15:10:12.030365 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:10:12.030375 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:10:12.030382 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:10:12.030389 kernel: landlock: Up and running. Feb 13 15:10:12.030396 kernel: SELinux: Initializing. Feb 13 15:10:12.030402 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:10:12.030409 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:10:12.030416 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:10:12.030424 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:10:12.030431 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:10:12.030439 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:10:12.030446 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:10:12.030452 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:10:12.030459 kernel: Remapping and enabling EFI services. Feb 13 15:10:12.030466 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:10:12.030472 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:10:12.030479 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:10:12.030486 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 15:10:12.030493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:12.030501 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:10:12.030508 kernel: Detected PIPT I-cache on CPU2 Feb 13 15:10:12.030519 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 15:10:12.030528 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 15:10:12.030535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:12.030543 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 15:10:12.030550 kernel: Detected PIPT I-cache on CPU3 Feb 13 15:10:12.030557 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 15:10:12.030564 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 15:10:12.030573 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:10:12.030580 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 15:10:12.030587 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:10:12.030595 kernel: SMP: Total of 4 processors activated. Feb 13 15:10:12.030602 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:10:12.030609 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:10:12.030617 kernel: CPU features: detected: Common not Private translations Feb 13 15:10:12.030625 kernel: CPU features: detected: CRC32 instructions Feb 13 15:10:12.030633 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:10:12.030641 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:10:12.030649 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:10:12.030657 kernel: CPU features: detected: Privileged Access Never Feb 13 15:10:12.030664 kernel: CPU features: detected: RAS Extension Support Feb 13 15:10:12.030672 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:10:12.030678 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:10:12.030686 kernel: alternatives: applying system-wide alternatives Feb 13 15:10:12.030693 kernel: devtmpfs: initialized Feb 13 15:10:12.030700 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:10:12.030709 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:10:12.030716 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:10:12.030723 kernel: SMBIOS 3.0.0 present. 
Feb 13 15:10:12.030730 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 15:10:12.030737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:10:12.030744 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:10:12.030752 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:10:12.030759 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:10:12.030768 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:10:12.030776 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Feb 13 15:10:12.030783 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:10:12.030790 kernel: cpuidle: using governor menu Feb 13 15:10:12.030797 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:10:12.030804 kernel: ASID allocator initialised with 32768 entries Feb 13 15:10:12.030811 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:10:12.030819 kernel: Serial: AMBA PL011 UART driver Feb 13 15:10:12.030826 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:10:12.030835 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:10:12.030843 kernel: Modules: 509280 pages in range for PLT usage Feb 13 15:10:12.030850 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:10:12.030865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:10:12.030881 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:10:12.030889 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:10:12.030896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:10:12.030904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:10:12.030911 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:10:12.030920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:10:12.030927 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:10:12.030935 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:10:12.030942 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:10:12.030949 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:10:12.030957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:10:12.030964 kernel: ACPI: Interpreter enabled Feb 13 15:10:12.030972 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:10:12.030979 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:10:12.030986 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:10:12.030995 kernel: printk: console [ttyAMA0] enabled Feb 13 15:10:12.031002 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:10:12.031159 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:10:12.031274 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:10:12.031344 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:10:12.031409 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:10:12.031471 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:10:12.031484 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:10:12.031491 
kernel: PCI host bridge to bus 0000:00 Feb 13 15:10:12.031565 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:10:12.031627 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:10:12.031687 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:10:12.031747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:10:12.031829 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:10:12.031910 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:10:12.031980 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 15:10:12.032068 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 15:10:12.032154 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:10:12.032242 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:10:12.032314 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 15:10:12.032384 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 15:10:12.032452 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:10:12.032514 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:10:12.032576 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:10:12.032585 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:10:12.032593 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:10:12.032600 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:10:12.032608 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:10:12.032618 kernel: iommu: Default domain type: Translated Feb 13 15:10:12.032625 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:10:12.032633 kernel: efivars: Registered efivars operations Feb 13 15:10:12.032640 kernel: vgaarb: loaded Feb 13 15:10:12.032647 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:10:12.032655 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:10:12.032662 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:10:12.032670 kernel: pnp: PnP ACPI init Feb 13 15:10:12.032759 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:10:12.032772 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:10:12.032779 kernel: NET: Registered PF_INET protocol family Feb 13 15:10:12.032787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:10:12.032794 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:10:12.032802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:10:12.032809 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:10:12.032817 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:10:12.032828 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:10:12.032837 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:10:12.032845 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:10:12.032852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:10:12.032859 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:10:12.032867 kernel: kvm [1]: HYP mode not available 
Feb 13 15:10:12.032874 kernel: Initialise system trusted keyrings Feb 13 15:10:12.032881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:10:12.032889 kernel: Key type asymmetric registered Feb 13 15:10:12.032896 kernel: Asymmetric key parser 'x509' registered Feb 13 15:10:12.032904 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:10:12.032912 kernel: io scheduler mq-deadline registered Feb 13 15:10:12.032920 kernel: io scheduler kyber registered Feb 13 15:10:12.032927 kernel: io scheduler bfq registered Feb 13 15:10:12.032935 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:10:12.032942 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:10:12.032950 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:10:12.033028 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 15:10:12.033038 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:10:12.033045 kernel: thunder_xcv, ver 1.0 Feb 13 15:10:12.033054 kernel: thunder_bgx, ver 1.0 Feb 13 15:10:12.033062 kernel: nicpf, ver 1.0 Feb 13 15:10:12.033069 kernel: nicvf, ver 1.0 Feb 13 15:10:12.033155 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:10:12.033234 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:10:11 UTC (1739459411) Feb 13 15:10:12.033244 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:10:12.033252 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:10:12.033259 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:10:12.033270 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:10:12.033277 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:10:12.033284 kernel: Segment Routing with IPv6 Feb 13 15:10:12.033292 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:10:12.033299 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:10:12.033306 kernel: Key type dns_resolver registered Feb 13 15:10:12.033313 kernel: registered taskstats version 1 Feb 13 15:10:12.033320 kernel: Loading compiled-in X.509 certificates Feb 13 15:10:12.033328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4' Feb 13 15:10:12.033336 kernel: Key type .fscrypt registered Feb 13 15:10:12.033344 kernel: Key type fscrypt-provisioning registered Feb 13 15:10:12.033351 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:10:12.033358 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:10:12.033366 kernel: ima: No architecture policies found Feb 13 15:10:12.033373 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:10:12.033381 kernel: clk: Disabling unused clocks Feb 13 15:10:12.033388 kernel: Freeing unused kernel memory: 38336K Feb 13 15:10:12.033395 kernel: Run /init as init process Feb 13 15:10:12.033404 kernel: with arguments: Feb 13 15:10:12.033411 kernel: /init Feb 13 15:10:12.033419 kernel: with environment: Feb 13 15:10:12.033426 kernel: HOME=/ Feb 13 15:10:12.033433 kernel: TERM=linux Feb 13 15:10:12.033441 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:10:12.033449 systemd[1]: Successfully made /usr/ read-only. 
Feb 13 15:10:12.033459 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:10:12.033469 systemd[1]: Detected virtualization kvm. Feb 13 15:10:12.033477 systemd[1]: Detected architecture arm64. Feb 13 15:10:12.033485 systemd[1]: Running in initrd. Feb 13 15:10:12.033492 systemd[1]: No hostname configured, using default hostname. Feb 13 15:10:12.033501 systemd[1]: Hostname set to . Feb 13 15:10:12.033509 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:10:12.033516 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:10:12.033524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:12.033534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:12.033542 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:10:12.033550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:10:12.033559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:10:12.033567 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:10:12.033576 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:10:12.033589 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:10:12.033602 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:12.033610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:10:12.033619 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:10:12.033627 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:10:12.033635 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:10:12.033643 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:10:12.033651 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:10:12.033659 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:10:12.033668 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:10:12.033677 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:10:12.033685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:10:12.033693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:10:12.033701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:12.033709 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:10:12.033717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:10:12.033725 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:10:12.033735 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:10:12.033743 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 13 15:10:12.033750 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:10:12.033758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:10:12.033766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:12.033774 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:10:12.033782 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:12.033792 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:10:12.033800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:10:12.033808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:12.033816 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:10:12.033824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:12.033849 systemd-journald[240]: Collecting audit messages is disabled. Feb 13 15:10:12.033871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:10:12.033880 systemd-journald[240]: Journal started Feb 13 15:10:12.033906 systemd-journald[240]: Runtime Journal (/run/log/journal/d45cba9392ee4a1e843cdeb92c14b74b) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:10:12.019652 systemd-modules-load[241]: Inserted module 'overlay' Feb 13 15:10:12.038105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:10:12.038126 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:10:12.038860 systemd-modules-load[241]: Inserted module 'br_netfilter' Feb 13 15:10:12.041726 kernel: Bridge firewalling registered Feb 13 15:10:12.040940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:12.046209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:10:12.049057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:10:12.052912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:12.056357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:12.057921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:12.060414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:12.074419 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:10:12.076961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:10:12.084619 dracut-cmdline[281]: dracut-dracut-053 Feb 13 15:10:12.088034 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef Feb 13 15:10:12.126989 systemd-resolved[284]: Positive Trust Anchors: Feb 13 15:10:12.127010 systemd-resolved[284]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:10:12.127041 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:10:12.139355 systemd-resolved[284]: Defaulting to hostname 'linux'. Feb 13 15:10:12.140434 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:10:12.141716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:12.179227 kernel: SCSI subsystem initialized Feb 13 15:10:12.184207 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:10:12.193215 kernel: iscsi: registered transport (tcp) Feb 13 15:10:12.205216 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:10:12.205236 kernel: QLogic iSCSI HBA Driver Feb 13 15:10:12.261857 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:10:12.273348 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:10:12.290234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:10:12.290284 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:10:12.291992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:10:12.343228 kernel: raid6: neonx8 gen() 15539 MB/s Feb 13 15:10:12.360223 kernel: raid6: neonx4 gen() 15609 MB/s Feb 13 15:10:12.377216 kernel: raid6: neonx2 gen() 13189 MB/s Feb 13 15:10:12.394214 kernel: raid6: neonx1 gen() 10527 MB/s Feb 13 15:10:12.411217 kernel: raid6: int64x8 gen() 6780 MB/s Feb 13 15:10:12.428219 kernel: raid6: int64x4 gen() 7209 MB/s Feb 13 15:10:12.445219 kernel: raid6: int64x2 gen() 6076 MB/s Feb 13 15:10:12.462335 kernel: raid6: int64x1 gen() 4971 MB/s Feb 13 15:10:12.462351 kernel: raid6: using algorithm neonx4 gen() 15609 MB/s Feb 13 15:10:12.480447 kernel: raid6: .... xor() 12442 MB/s, rmw enabled Feb 13 15:10:12.480463 kernel: raid6: using neon recovery algorithm Feb 13 15:10:12.486214 kernel: xor: measuring software checksum speed Feb 13 15:10:12.486243 kernel: 8regs : 21618 MB/sec Feb 13 15:10:12.487489 kernel: 32regs : 18511 MB/sec Feb 13 15:10:12.487502 kernel: arm64_neon : 27785 MB/sec Feb 13 15:10:12.487511 kernel: xor: using function: arm64_neon (27785 MB/sec) Feb 13 15:10:12.540223 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:10:12.553235 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:10:12.562413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:12.576672 systemd-udevd[466]: Using default interface naming scheme 'v255'. Feb 13 15:10:12.580517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:12.595417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Feb 13 15:10:12.615887 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Feb 13 15:10:12.650238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:10:12.658402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:10:12.700822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:12.712720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:10:12.724643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:10:12.726573 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:10:12.728479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:12.730815 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:10:12.741413 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:10:12.752406 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:10:12.767251 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:10:12.777912 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:10:12.778048 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:10:12.778060 kernel: GPT:9289727 != 19775487 Feb 13 15:10:12.778070 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:10:12.778079 kernel: GPT:9289727 != 19775487 Feb 13 15:10:12.778087 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:10:12.778105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:12.773407 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:10:12.773527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:12.775934 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:12.778295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:10:12.778452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:12.783466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:12.791471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:12.805354 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (521) Feb 13 15:10:12.805406 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (510) Feb 13 15:10:12.807366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:10:12.808873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:12.830578 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:10:12.836956 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:10:12.838266 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:10:12.846976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:10:12.860400 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Feb 13 15:10:12.865419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:10:12.868199 disk-uuid[555]: Primary Header is updated. Feb 13 15:10:12.868199 disk-uuid[555]: Secondary Entries is updated. Feb 13 15:10:12.868199 disk-uuid[555]: Secondary Header is updated. Feb 13 15:10:12.873231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:12.895098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:13.892216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:10:13.892510 disk-uuid[556]: The operation has completed successfully. Feb 13 15:10:13.925493 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:10:13.925608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:10:13.959419 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:10:13.962414 sh[576]: Success Feb 13 15:10:13.973244 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:10:14.007759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:10:14.022685 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:10:14.024747 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:10:14.036241 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855 Feb 13 15:10:14.036291 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:14.037587 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:10:14.037606 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:10:14.039204 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:10:14.044919 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:10:14.048433 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:10:14.064432 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:10:14.066282 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:10:14.076914 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:10:14.076967 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:14.076978 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:14.081208 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:14.094490 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:10:14.096470 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:10:14.103127 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:10:14.119494 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:10:14.178413 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:10:14.192467 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:10:14.252365 systemd-networkd[763]: lo: Link UP Feb 13 15:10:14.252375 systemd-networkd[763]: lo: Gained carrier Feb 13 15:10:14.253355 systemd-networkd[763]: Enumeration completed Feb 13 15:10:14.253642 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:10:14.254142 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:14.254146 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:10:14.254862 systemd-networkd[763]: eth0: Link UP Feb 13 15:10:14.254865 systemd-networkd[763]: eth0: Gained carrier Feb 13 15:10:14.254872 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:14.255589 systemd[1]: Reached target network.target - Network. Feb 13 15:10:14.273271 ignition[674]: Ignition 2.20.0 Feb 13 15:10:14.273284 ignition[674]: Stage: fetch-offline Feb 13 15:10:14.273324 ignition[674]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:14.273334 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:14.273559 ignition[674]: parsed url from cmdline: "" Feb 13 15:10:14.276286 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:10:14.273563 ignition[674]: no config URL provided Feb 13 15:10:14.273567 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:10:14.273574 ignition[674]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:10:14.273602 ignition[674]: op(1): [started] loading QEMU firmware config module Feb 13 15:10:14.273607 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:10:14.283861 ignition[674]: op(1): [finished] loading QEMU firmware config module Feb 13 15:10:14.306218 ignition[674]: parsing config with SHA512: b8bd9bdaaf50c4bbff5b327dffffac22d8573c5362752d30311714ba35067444ee960ae27a0230dadf63552d713a5eee99d979db5727ba6635cdd76d1f805441 Feb 13 15:10:14.313763 unknown[674]: fetched base config from "system" Feb 13 15:10:14.313779 unknown[674]: fetched user config from "qemu" Feb 13 15:10:14.314413 ignition[674]: fetch-offline: fetch-offline passed Feb 13 15:10:14.316103 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:10:14.314513 ignition[674]: Ignition finished successfully Feb 13 15:10:14.317747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:10:14.324388 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:10:14.337722 ignition[775]: Ignition 2.20.0 Feb 13 15:10:14.337733 ignition[775]: Stage: kargs Feb 13 15:10:14.337894 ignition[775]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:14.337904 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:14.342018 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:10:14.338767 ignition[775]: kargs: kargs passed Feb 13 15:10:14.338812 ignition[775]: Ignition finished successfully Feb 13 15:10:14.357392 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:10:14.368374 ignition[783]: Ignition 2.20.0 Feb 13 15:10:14.368385 ignition[783]: Stage: disks Feb 13 15:10:14.368550 ignition[783]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:14.368560 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:14.369485 ignition[783]: disks: disks passed Feb 13 15:10:14.369537 ignition[783]: Ignition finished successfully Feb 13 15:10:14.373224 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:10:14.374809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:10:14.376463 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:10:14.378598 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:10:14.380661 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:10:14.382902 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:10:14.395402 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:10:14.413913 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:10:14.420224 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:10:15.047323 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:10:15.095215 kernel: EXT4-fs (vda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none. Feb 13 15:10:15.096023 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:10:15.097527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:10:15.110329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:10:15.113025 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:10:15.114222 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:10:15.114268 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:10:15.114293 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:10:15.119412 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:10:15.122996 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:10:15.130298 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802) Feb 13 15:10:15.130456 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:10:15.132994 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:15.133038 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:15.138214 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:15.138982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:10:15.176681 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:10:15.181927 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:10:15.186891 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:10:15.192636 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:10:15.302458 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Feb 13 15:10:15.314351 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:10:15.317136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:10:15.323214 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:10:15.342881 ignition[917]: INFO : Ignition 2.20.0 Feb 13 15:10:15.342881 ignition[917]: INFO : Stage: mount Feb 13 15:10:15.342881 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:15.342881 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:15.350539 ignition[917]: INFO : mount: mount passed Feb 13 15:10:15.350539 ignition[917]: INFO : Ignition finished successfully Feb 13 15:10:15.344151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:10:15.346552 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:10:15.360335 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:10:16.034989 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:10:16.049428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:10:16.062225 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (933) Feb 13 15:10:16.065943 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:10:16.065958 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:10:16.065968 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:10:16.071206 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:10:16.072398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:10:16.097444 ignition[950]: INFO : Ignition 2.20.0 Feb 13 15:10:16.097444 ignition[950]: INFO : Stage: files Feb 13 15:10:16.099362 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:16.099362 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:16.099362 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:10:16.103355 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:10:16.103355 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:10:16.103355 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:10:16.103355 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:10:16.103355 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:10:16.102534 unknown[950]: wrote ssh authorized keys file for user: core Feb 13 15:10:16.111369 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:10:16.111369 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:10:16.232244 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:10:16.301349 systemd-networkd[763]: eth0: Gained IPv6LL Feb 13 15:10:16.531337 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:10:16.533623 ignition[950]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:10:16.533623 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 15:10:16.947012 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:10:17.412105 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:10:17.412105 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(d): op(e): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:10:17.416213 ignition[950]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:10:17.449327 ignition[950]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:10:17.455348 ignition[950]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:10:17.457124 ignition[950]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:10:17.457124 ignition[950]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:10:17.457124 ignition[950]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:10:17.457124 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:10:17.457124 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:10:17.457124 ignition[950]: INFO : files: files passed Feb 13 15:10:17.457124 ignition[950]: INFO : Ignition finished successfully Feb 13 15:10:17.460461 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:10:17.477428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:10:17.481903 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:10:17.488514 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:10:17.488619 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:10:17.493637 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:10:17.497951 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:17.497951 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:17.501531 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:10:17.503039 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:10:17.504764 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:10:17.515490 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:10:17.539139 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:10:17.539251 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:10:17.541883 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:10:17.544507 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:10:17.547007 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:10:17.566424 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:10:17.581921 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 15:10:17.584736 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:10:17.597847 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:17.599144 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:17.601413 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:10:17.603330 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:10:17.603458 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:10:17.606499 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:10:17.609098 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:10:17.611199 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:10:17.613279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:10:17.615595 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:10:17.617895 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:10:17.620065 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:10:17.622438 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:10:17.624663 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:10:17.626598 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:10:17.628279 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:10:17.628421 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:10:17.631384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:10:17.633301 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:17.635241 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:10:17.636288 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:17.637571 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:10:17.637691 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:10:17.640925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:10:17.641121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:10:17.643419 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:10:17.645121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:10:17.646131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:17.648511 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:10:17.650236 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:10:17.652154 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:10:17.652254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:10:17.654476 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:10:17.654557 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:10:17.656251 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:10:17.656377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Feb 13 15:10:17.658219 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:10:17.658328 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:10:17.672417 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:10:17.673418 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:10:17.673569 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:17.679425 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:10:17.680398 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:10:17.680634 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:17.685459 ignition[1005]: INFO : Ignition 2.20.0 Feb 13 15:10:17.685459 ignition[1005]: INFO : Stage: umount Feb 13 15:10:17.685459 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:10:17.685459 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:10:17.685459 ignition[1005]: INFO : umount: umount passed Feb 13 15:10:17.685459 ignition[1005]: INFO : Ignition finished successfully Feb 13 15:10:17.685250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:10:17.685355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:10:17.688875 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:10:17.688962 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:10:17.694104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:10:17.694224 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:10:17.696842 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:10:17.697674 systemd[1]: Stopped target network.target - Network. Feb 13 15:10:17.698643 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:10:17.698834 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:10:17.708146 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:10:17.708245 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:10:17.711557 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:10:17.711614 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:10:17.713542 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:10:17.713585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:10:17.716517 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:10:17.718286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:10:17.729245 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:10:17.729357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:10:17.732787 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:10:17.733048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:10:17.733096 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:17.736330 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:10:17.736571 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 13 15:10:17.736675 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:10:17.740931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:10:17.741340 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:10:17.741396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:10:17.755333 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:10:17.756237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:10:17.756301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:10:17.758579 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:10:17.758635 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:17.761976 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:10:17.762019 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:17.763295 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:17.766470 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:10:17.766824 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:10:17.767532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:10:17.770846 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:10:17.770899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:10:17.777263 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:10:17.777362 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:10:17.790864 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:10:17.790998 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:17.793313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:10:17.793349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:10:17.795614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:10:17.795660 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:17.797359 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:10:17.797409 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:10:17.800251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:10:17.800297 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:10:17.802946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:10:17.803072 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:10:17.820388 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:10:17.821441 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:10:17.821504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:17.825158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:10:17.825226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:17.828816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 13 15:10:17.828917 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:10:17.831464 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:10:17.834002 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:10:17.842795 systemd[1]: Switching root. Feb 13 15:10:17.878518 systemd-journald[240]: Journal stopped Feb 13 15:10:18.741546 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). Feb 13 15:10:18.741610 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:10:18.741624 kernel: SELinux: policy capability open_perms=1 Feb 13 15:10:18.741635 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:10:18.741649 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:10:18.741662 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:10:18.741672 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:10:18.741682 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:10:18.741692 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:10:18.741702 kernel: audit: type=1403 audit(1739459418.041:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:10:18.741712 systemd[1]: Successfully loaded SELinux policy in 31.855ms. Feb 13 15:10:18.741733 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.271ms. Feb 13 15:10:18.741745 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:10:18.741756 systemd[1]: Detected virtualization kvm. Feb 13 15:10:18.741768 systemd[1]: Detected architecture arm64. Feb 13 15:10:18.741778 systemd[1]: Detected first boot. Feb 13 15:10:18.741788 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:10:18.741799 zram_generator::config[1052]: No configuration found. Feb 13 15:10:18.741809 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:10:18.741819 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:10:18.741830 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:10:18.741840 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:10:18.741856 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:10:18.741867 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:10:18.741877 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:10:18.741888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:10:18.741898 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:10:18.741909 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:10:18.741920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:10:18.741930 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:10:18.741941 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:10:18.741953 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 13 15:10:18.741963 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:10:18.741974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:10:18.741985 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:10:18.741995 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:10:18.742005 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:10:18.742016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:10:18.742026 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:10:18.742036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:10:18.742048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:10:18.742059 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:10:18.742069 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:10:18.742085 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:10:18.742096 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:10:18.742107 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:10:18.742118 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:10:18.742131 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:10:18.742142 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:10:18.742154 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:10:18.742165 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:10:18.742175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:10:18.742186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:10:18.742205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:10:18.742218 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:10:18.742229 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:10:18.742239 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:10:18.742252 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:10:18.742262 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:10:18.742273 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:10:18.742283 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:10:18.742294 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:10:18.742304 systemd[1]: Reached target machines.target - Containers. Feb 13 15:10:18.742314 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:10:18.742325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 15:10:18.742338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:10:18.742349 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:10:18.742359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:18.742369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:10:18.742380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:18.742392 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:10:18.742402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:18.742413 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:10:18.742425 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:10:18.742435 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:10:18.742445 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:10:18.742456 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:10:18.742466 kernel: fuse: init (API version 7.39) Feb 13 15:10:18.742476 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:10:18.742487 kernel: loop: module loaded Feb 13 15:10:18.742496 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:10:18.742506 kernel: ACPI: bus type drm_connector registered Feb 13 15:10:18.742518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:10:18.742529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:10:18.742540 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:10:18.742550 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:10:18.742561 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:10:18.742573 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:10:18.742583 systemd[1]: Stopped verity-setup.service. Feb 13 15:10:18.742594 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:10:18.742604 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:10:18.742615 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:10:18.742626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:10:18.742636 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:10:18.742646 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:10:18.742657 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:10:18.742695 systemd-journald[1124]: Collecting audit messages is disabled. Feb 13 15:10:18.742720 systemd-journald[1124]: Journal started Feb 13 15:10:18.742741 systemd-journald[1124]: Runtime Journal (/run/log/journal/d45cba9392ee4a1e843cdeb92c14b74b) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:10:18.499001 systemd[1]: Queued start job for default target multi-user.target. 
Feb 13 15:10:18.508914 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:10:18.509339 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:10:18.745185 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:10:18.747335 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:10:18.748231 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:10:18.748446 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:10:18.750011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:18.750231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:18.751711 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:10:18.751881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:10:18.753525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:18.753695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:18.755415 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:10:18.755575 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:10:18.757598 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:18.757779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:18.759467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:10:18.761052 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:10:18.762596 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:10:18.764347 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:10:18.778413 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:10:18.782846 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:10:18.796319 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:10:18.798711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:10:18.799957 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:10:18.800006 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:10:18.802164 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:10:18.804737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:10:18.807068 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:10:18.808418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:18.810027 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:10:18.815437 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:10:18.817746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 15:10:18.821399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:10:18.822729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:10:18.828284 systemd-journald[1124]: Time spent on flushing to /var/log/journal/d45cba9392ee4a1e843cdeb92c14b74b is 24.763ms for 865 entries. Feb 13 15:10:18.828284 systemd-journald[1124]: System Journal (/var/log/journal/d45cba9392ee4a1e843cdeb92c14b74b) is 8M, max 195.6M, 187.6M free. Feb 13 15:10:18.867221 systemd-journald[1124]: Received client request to flush runtime journal. Feb 13 15:10:18.867261 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 15:10:18.867274 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:10:18.826406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:10:18.832431 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:10:18.840727 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:10:18.846402 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:10:18.854167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:10:18.859284 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:10:18.860987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:10:18.862843 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:10:18.871271 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:10:18.882229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:10:18.886855 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:10:18.897215 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 15:10:18.898529 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:10:18.900754 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:10:18.905304 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:10:18.908627 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:10:18.923848 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:10:18.924687 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:10:18.936775 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 15:10:18.936790 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 15:10:18.948225 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 15:10:18.954266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:10:19.003237 kernel: loop3: detected capacity change from 0 to 123192 Feb 13 15:10:19.010965 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 15:10:19.019287 kernel: loop5: detected capacity change from 0 to 189592 Feb 13 15:10:19.027013 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:10:19.027457 (sd-merge)[1193]: Merged extensions into '/usr'. 
Feb 13 15:10:19.031653 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:10:19.031669 systemd[1]: Reloading... Feb 13 15:10:19.099298 zram_generator::config[1218]: No configuration found. Feb 13 15:10:19.183729 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:10:19.219409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:19.269944 systemd[1]: Reloading finished in 237 ms. Feb 13 15:10:19.287937 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:10:19.290508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:10:19.303576 systemd[1]: Starting ensure-sysext.service... Feb 13 15:10:19.305525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:10:19.325931 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:10:19.325945 systemd[1]: Reloading... Feb 13 15:10:19.328113 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:10:19.328660 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:10:19.329433 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:10:19.329746 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 15:10:19.329865 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Feb 13 15:10:19.333715 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:10:19.333729 systemd-tmpfiles[1256]: Skipping /boot Feb 13 15:10:19.342488 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:10:19.342503 systemd-tmpfiles[1256]: Skipping /boot Feb 13 15:10:19.383218 zram_generator::config[1288]: No configuration found. Feb 13 15:10:19.466952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:19.517056 systemd[1]: Reloading finished in 190 ms. Feb 13 15:10:19.526562 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:10:19.546259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:10:19.553812 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:10:19.556375 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:10:19.558644 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:10:19.564472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:10:19.574514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:10:19.578211 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Feb 13 15:10:19.583803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:19.586340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:19.591354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:19.595071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:19.597231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:19.597352 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:10:19.601455 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:10:19.608375 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:10:19.610554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:19.610716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:19.612759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:19.612922 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:19.615007 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:19.615347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:19.617969 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Feb 13 15:10:19.624004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:19.625984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:19.630755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:10:19.636522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:10:19.637644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:19.637817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:10:19.639184 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:10:19.641714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:19.641897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:19.648818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:10:19.654657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:10:19.656220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:10:19.658691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:10:19.660475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:10:19.663469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 15:10:19.663595 augenrules[1362]: No rules Feb 13 15:10:19.664714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:10:19.664855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:10:19.664966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:10:19.665700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:10:19.667634 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:10:19.669212 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:10:19.671830 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:10:19.673684 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:10:19.675546 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:10:19.675695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:10:19.677518 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:10:19.679118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:10:19.679396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:10:19.681687 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:10:19.681843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:10:19.691579 systemd[1]: Finished ensure-sysext.service. Feb 13 15:10:19.710420 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:10:19.712307 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:10:19.716998 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:10:19.720391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:10:19.721304 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1375) Feb 13 15:10:19.744483 systemd-resolved[1324]: Positive Trust Anchors: Feb 13 15:10:19.746073 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:10:19.746279 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:10:19.746311 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:10:19.756239 systemd-resolved[1324]: Defaulting to hostname 'linux'. 
Feb 13 15:10:19.765284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:10:19.766537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:10:19.784842 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:10:19.801341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:10:19.817824 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:10:19.819228 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:10:19.820568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:10:19.822254 systemd-networkd[1399]: lo: Link UP Feb 13 15:10:19.822262 systemd-networkd[1399]: lo: Gained carrier Feb 13 15:10:19.826943 systemd-networkd[1399]: Enumeration completed Feb 13 15:10:19.827049 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:10:19.828220 systemd[1]: Reached target network.target - Network. Feb 13 15:10:19.833449 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:10:19.834872 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:19.834882 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:10:19.836956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:10:19.837505 systemd-networkd[1399]: eth0: Link UP Feb 13 15:10:19.837513 systemd-networkd[1399]: eth0: Gained carrier Feb 13 15:10:19.837527 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:10:19.851278 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:10:19.852594 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 15:10:19.853381 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:10:19.853430 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 15:10:19.943732 UTC. Feb 13 15:10:19.856279 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:10:19.868411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:10:19.869920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:10:19.875521 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:10:19.890183 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:10:19.902376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:10:19.926602 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:10:19.928128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:10:19.929337 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:10:19.930457 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 15:10:19.931687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:10:19.933112 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:10:19.934306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:10:19.935538 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:10:19.936902 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:10:19.936940 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:10:19.937854 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:10:19.939692 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:10:19.941986 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:10:19.945133 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:10:19.946542 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:10:19.947784 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:10:19.954098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:10:19.955543 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:10:19.957798 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:10:19.959438 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:10:19.960596 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:10:19.961561 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:10:19.962520 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:10:19.962551 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:10:19.963400 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:10:19.965027 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:10:19.966869 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:10:19.971323 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:10:19.974896 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:10:19.975970 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:10:19.976933 jq[1432]: false Feb 13 15:10:19.980041 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:10:19.983345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:10:19.985402 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:10:19.990349 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:10:19.993421 extend-filesystems[1433]: Found loop3 Feb 13 15:10:19.993430 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:10:19.994583 dbus-daemon[1431]: [system] SELinux support is enabled Feb 13 15:10:19.997135 extend-filesystems[1433]: Found loop4 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found loop5 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda1 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda2 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda3 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found usr Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda4 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda6 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda7 Feb 13 15:10:19.997135 extend-filesystems[1433]: Found vda9 Feb 13 15:10:19.997135 extend-filesystems[1433]: Checking size of /dev/vda9 Feb 13 15:10:19.996418 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:10:19.996843 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:10:19.998465 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:10:20.003322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:10:20.005093 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:10:20.009620 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:10:20.012359 extend-filesystems[1433]: Resized partition /dev/vda9 Feb 13 15:10:20.016094 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:10:20.017940 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:10:20.018107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:10:20.018371 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:10:20.018521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:10:20.021216 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:10:20.021420 jq[1448]: true Feb 13 15:10:20.022413 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:10:20.022580 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:10:20.027255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1397) Feb 13 15:10:20.041613 jq[1457]: true Feb 13 15:10:20.047362 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:10:20.051395 tar[1456]: linux-arm64/helm Feb 13 15:10:20.057660 update_engine[1446]: I20250213 15:10:20.057485 1446 main.cc:92] Flatcar Update Engine starting Feb 13 15:10:20.058370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:10:20.058420 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 15:10:20.061241 update_engine[1446]: I20250213 15:10:20.059752 1446 update_check_scheduler.cc:74] Next update check in 3m44s Feb 13 15:10:20.060126 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:10:20.060144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:10:20.061509 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:10:20.072412 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:10:20.081218 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:10:20.094934 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:10:20.094934 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:10:20.094934 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:10:20.102001 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Feb 13 15:10:20.098760 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:10:20.106039 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:10:20.115970 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:10:20.117253 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:10:20.118989 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:10:20.128189 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:10:20.128741 systemd-logind[1444]: New seat seat0. Feb 13 15:10:20.131369 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:10:20.149887 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:10:20.264680 containerd[1461]: time="2025-02-13T15:10:20.264531042Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:10:20.294937 containerd[1461]: time="2025-02-13T15:10:20.294883448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296320 containerd[1461]: time="2025-02-13T15:10:20.296286613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296320 containerd[1461]: time="2025-02-13T15:10:20.296315093Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:10:20.296370 containerd[1461]: time="2025-02-13T15:10:20.296331988Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:10:20.296505 containerd[1461]: time="2025-02-13T15:10:20.296487019Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:10:20.296552 containerd[1461]: time="2025-02-13T15:10:20.296507494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296582 containerd[1461]: time="2025-02-13T15:10:20.296565219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296606 containerd[1461]: time="2025-02-13T15:10:20.296581349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296802 containerd[1461]: time="2025-02-13T15:10:20.296779221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296802 containerd[1461]: time="2025-02-13T15:10:20.296798047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296850 containerd[1461]: time="2025-02-13T15:10:20.296812890Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296850 containerd[1461]: time="2025-02-13T15:10:20.296822504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.296937 containerd[1461]: time="2025-02-13T15:10:20.296918765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.297130 containerd[1461]: time="2025-02-13T15:10:20.297110965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:10:20.297273 containerd[1461]: time="2025-02-13T15:10:20.297254814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:10:20.297273 containerd[1461]: time="2025-02-13T15:10:20.297272232Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:10:20.297366 containerd[1461]: time="2025-02-13T15:10:20.297347736Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:10:20.297411 containerd[1461]: time="2025-02-13T15:10:20.297396208Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:10:20.301083 containerd[1461]: time="2025-02-13T15:10:20.301052394Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:10:20.301143 containerd[1461]: time="2025-02-13T15:10:20.301126129Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:10:20.301169 containerd[1461]: time="2025-02-13T15:10:20.301146724Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:10:20.301223 containerd[1461]: time="2025-02-13T15:10:20.301168527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:10:20.301262 containerd[1461]: time="2025-02-13T15:10:20.301228102Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 15:10:20.301370 containerd[1461]: time="2025-02-13T15:10:20.301352239Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:10:20.301625 containerd[1461]: time="2025-02-13T15:10:20.301604658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:10:20.301723 containerd[1461]: time="2025-02-13T15:10:20.301706591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:10:20.301752 containerd[1461]: time="2025-02-13T15:10:20.301725457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:10:20.301752 containerd[1461]: time="2025-02-13T15:10:20.301739858Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:10:20.301795 containerd[1461]: time="2025-02-13T15:10:20.301752529Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301795 containerd[1461]: time="2025-02-13T15:10:20.301766045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301795 containerd[1461]: time="2025-02-13T15:10:20.301787686Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301853 containerd[1461]: time="2025-02-13T15:10:20.301801162Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301853 containerd[1461]: time="2025-02-13T15:10:20.301815281Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301853 containerd[1461]: time="2025-02-13T15:10:20.301827309Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301853 containerd[1461]: time="2025-02-13T15:10:20.301839176Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301853 containerd[1461]: time="2025-02-13T15:10:20.301851324Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:10:20.301932 containerd[1461]: time="2025-02-13T15:10:20.301870592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.301932 containerd[1461]: time="2025-02-13T15:10:20.301884148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.301932 containerd[1461]: time="2025-02-13T15:10:20.301895935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.301932 containerd[1461]: time="2025-02-13T15:10:20.301907479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.301932 containerd[1461]: time="2025-02-13T15:10:20.301921076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301933747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301945855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301958365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301970795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301985236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.301996339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302016 containerd[1461]: time="2025-02-13T15:10:20.302007884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302125 containerd[1461]: time="2025-02-13T15:10:20.302027192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302125 containerd[1461]: time="2025-02-13T15:10:20.302043162Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:10:20.302125 containerd[1461]: time="2025-02-13T15:10:20.302063597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302125 containerd[1461]: time="2025-02-13T15:10:20.302078199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302125 containerd[1461]: time="2025-02-13T15:10:20.302088979Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:10:20.302860 containerd[1461]: time="2025-02-13T15:10:20.302831794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:10:20.302892 containerd[1461]: time="2025-02-13T15:10:20.302864135Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:10:20.302892 containerd[1461]: time="2025-02-13T15:10:20.302875358Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:10:20.302928 containerd[1461]: time="2025-02-13T15:10:20.302889277Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:10:20.302928 containerd[1461]: time="2025-02-13T15:10:20.302920331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:10:20.302973 containerd[1461]: time="2025-02-13T15:10:20.302936261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:10:20.302973 containerd[1461]: time="2025-02-13T15:10:20.302946840Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:10:20.302973 containerd[1461]: time="2025-02-13T15:10:20.302957379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:10:20.303339 containerd[1461]: time="2025-02-13T15:10:20.303291256Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:10:20.303447 containerd[1461]: time="2025-02-13T15:10:20.303343308Z" level=info msg="Connect containerd service" Feb 13 15:10:20.303447 containerd[1461]: time="2025-02-13T15:10:20.303372955Z" level=info msg="using legacy CRI server" Feb 13 15:10:20.303447 containerd[1461]: time="2025-02-13T15:10:20.303380195Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:10:20.303622 containerd[1461]: time="2025-02-13T15:10:20.303604255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:10:20.304376 containerd[1461]: time="2025-02-13T15:10:20.304347230Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:10:20.304571 
containerd[1461]: time="2025-02-13T15:10:20.304538424Z" level=info msg="Start subscribing containerd event" Feb 13 15:10:20.304598 containerd[1461]: time="2025-02-13T15:10:20.304586213Z" level=info msg="Start recovering state" Feb 13 15:10:20.305193 containerd[1461]: time="2025-02-13T15:10:20.305141010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:10:20.305268 containerd[1461]: time="2025-02-13T15:10:20.305250787Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:10:20.306695 containerd[1461]: time="2025-02-13T15:10:20.306670405Z" level=info msg="Start event monitor" Feb 13 15:10:20.306735 containerd[1461]: time="2025-02-13T15:10:20.306698684Z" level=info msg="Start snapshots syncer" Feb 13 15:10:20.306735 containerd[1461]: time="2025-02-13T15:10:20.306711435Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:10:20.306735 containerd[1461]: time="2025-02-13T15:10:20.306726158Z" level=info msg="Start streaming server" Feb 13 15:10:20.306964 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:10:20.308405 containerd[1461]: time="2025-02-13T15:10:20.308379569Z" level=info msg="containerd successfully booted in 0.045374s" Feb 13 15:10:20.414173 tar[1456]: linux-arm64/LICENSE Feb 13 15:10:20.414173 tar[1456]: linux-arm64/README.md Feb 13 15:10:20.430714 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:10:20.662203 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:10:20.682242 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:10:20.696507 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:10:20.701403 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:10:20.702274 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:10:20.704750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:10:20.714609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:10:20.717270 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:10:20.719273 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:10:20.720545 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:10:20.973975 systemd-networkd[1399]: eth0: Gained IPv6LL Feb 13 15:10:20.977301 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:10:20.979059 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:10:20.990423 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:10:20.992759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:20.994882 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:10:21.010415 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:10:21.010665 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:10:21.012359 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:10:21.015986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:10:21.484964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:21.486755 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 15:10:21.488405 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:10:21.491270 systemd[1]: Startup finished in 647ms (kernel) + 6.259s (initrd) + 3.497s (userspace) = 10.405s. Feb 13 15:10:21.929263 kubelet[1544]: E0213 15:10:21.929144 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:10:21.931846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:10:21.932005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:10:21.932490 systemd[1]: kubelet.service: Consumed 793ms CPU time, 233.1M memory peak. Feb 13 15:10:24.987117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:10:24.996544 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:54712.service - OpenSSH per-connection server daemon (10.0.0.1:54712). Feb 13 15:10:25.075813 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 54712 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:10:25.077898 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.090084 systemd-logind[1444]: New session 1 of user core. Feb 13 15:10:25.091117 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:10:25.100483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:10:25.110355 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:10:25.114504 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:10:25.120443 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:10:25.122772 systemd-logind[1444]: New session c1 of user core. Feb 13 15:10:25.229903 systemd[1561]: Queued start job for default target default.target. Feb 13 15:10:25.240279 systemd[1561]: Created slice app.slice - User Application Slice. Feb 13 15:10:25.240309 systemd[1561]: Reached target paths.target - Paths. Feb 13 15:10:25.240348 systemd[1561]: Reached target timers.target - Timers. Feb 13 15:10:25.241649 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:10:25.252031 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:10:25.252101 systemd[1561]: Reached target sockets.target - Sockets. Feb 13 15:10:25.252149 systemd[1561]: Reached target basic.target - Basic System. Feb 13 15:10:25.252183 systemd[1561]: Reached target default.target - Main User Target. Feb 13 15:10:25.252224 systemd[1561]: Startup finished in 123ms. Feb 13 15:10:25.252443 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:10:25.254240 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:10:25.317653 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:54720.service - OpenSSH per-connection server daemon (10.0.0.1:54720). 
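The kubelet exit recorded above is the usual state of a kubeadm-provisioned node before kubeadm init or kubeadm join has run: /var/lib/kubelet/config.yaml is normally written by kubeadm, so until it exists the unit fails and systemd schedules retries (the retry at 15:10:32 below fails the same way; by the restart at 15:10:42 the file evidently exists, since the kubelet then comes up and begins bootstrapping). A minimal sketch of what that file typically contains is shown here; the values are assumptions chosen to mirror what the eventually-successful start later in this log reports, not the actual contents of this host's file.

# Sketch of a kubeadm-written /var/lib/kubelet/config.yaml (illustrative only).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                         # matches CgroupDriver "systemd" in the node config logged below
staticPodPath: /etc/kubernetes/manifests      # matches "Adding static pod path" logged below
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # matches the client-ca-bundle controller logged below
rotateCertificates: true                      # "Client rotation is on" in the later kubelet start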
Feb 13 15:10:25.375153 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 54720 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:10:25.376503 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.381247 systemd-logind[1444]: New session 2 of user core. Feb 13 15:10:25.391412 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:10:25.445558 sshd[1574]: Connection closed by 10.0.0.1 port 54720 Feb 13 15:10:25.446044 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:25.463149 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:54720.service: Deactivated successfully. Feb 13 15:10:25.468465 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:10:25.476052 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:10:25.486542 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Feb 13 15:10:25.487119 systemd-logind[1444]: Removed session 2. Feb 13 15:10:25.534251 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:10:25.535398 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.540043 systemd-logind[1444]: New session 3 of user core. Feb 13 15:10:25.550432 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:10:25.599389 sshd[1582]: Connection closed by 10.0.0.1 port 54724 Feb 13 15:10:25.599764 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:25.616109 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:54724.service: Deactivated successfully. Feb 13 15:10:25.618060 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:10:25.625369 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:10:25.637976 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). Feb 13 15:10:25.639969 systemd-logind[1444]: Removed session 3. Feb 13 15:10:25.683450 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:10:25.684815 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.690141 systemd-logind[1444]: New session 4 of user core. Feb 13 15:10:25.707446 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:10:25.761394 sshd[1590]: Connection closed by 10.0.0.1 port 54738 Feb 13 15:10:25.763233 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:25.779863 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:54738.service: Deactivated successfully. Feb 13 15:10:25.782406 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:10:25.783099 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:10:25.793559 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). Feb 13 15:10:25.794456 systemd-logind[1444]: Removed session 4. 
Feb 13 15:10:25.858271 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:10:25.859969 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.879899 systemd-logind[1444]: New session 5 of user core. Feb 13 15:10:25.892420 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:10:25.958436 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:10:25.962268 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:10:26.351447 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:10:26.351521 (dockerd)[1619]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:10:26.624600 dockerd[1619]: time="2025-02-13T15:10:26.624482108Z" level=info msg="Starting up" Feb 13 15:10:26.838630 dockerd[1619]: time="2025-02-13T15:10:26.838579586Z" level=info msg="Loading containers: start." Feb 13 15:10:27.003625 kernel: Initializing XFRM netlink socket Feb 13 15:10:27.070243 systemd-networkd[1399]: docker0: Link UP Feb 13 15:10:27.115405 dockerd[1619]: time="2025-02-13T15:10:27.115365975Z" level=info msg="Loading containers: done." Feb 13 15:10:27.134633 dockerd[1619]: time="2025-02-13T15:10:27.134068247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:10:27.134633 dockerd[1619]: time="2025-02-13T15:10:27.134171497Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:10:27.134633 dockerd[1619]: time="2025-02-13T15:10:27.134369286Z" level=info msg="Daemon has completed initialization" Feb 13 15:10:27.178636 dockerd[1619]: time="2025-02-13T15:10:27.178560473Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:10:27.178821 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:10:27.872743 containerd[1461]: time="2025-02-13T15:10:27.872685235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:10:28.678784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064098587.mount: Deactivated successfully. 
Feb 13 15:10:29.802822 containerd[1461]: time="2025-02-13T15:10:29.802759844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.803769 containerd[1461]: time="2025-02-13T15:10:29.803711964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 15:10:29.804558 containerd[1461]: time="2025-02-13T15:10:29.804522478Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.808218 containerd[1461]: time="2025-02-13T15:10:29.808155488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:29.810061 containerd[1461]: time="2025-02-13T15:10:29.809886543Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.937152778s" Feb 13 15:10:29.810061 containerd[1461]: time="2025-02-13T15:10:29.809924382Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:10:29.810681 containerd[1461]: time="2025-02-13T15:10:29.810658857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:10:30.929164 containerd[1461]: time="2025-02-13T15:10:30.929113870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.930146 containerd[1461]: time="2025-02-13T15:10:30.930108610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 15:10:30.930834 containerd[1461]: time="2025-02-13T15:10:30.930789742Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.933600 containerd[1461]: time="2025-02-13T15:10:30.933573128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:30.935668 containerd[1461]: time="2025-02-13T15:10:30.935631530Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.124939932s" Feb 13 15:10:30.935851 containerd[1461]: time="2025-02-13T15:10:30.935749760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:10:30.936335 
containerd[1461]: time="2025-02-13T15:10:30.936313225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:10:32.051980 containerd[1461]: time="2025-02-13T15:10:32.051922464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.053522 containerd[1461]: time="2025-02-13T15:10:32.053465874Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 15:10:32.054471 containerd[1461]: time="2025-02-13T15:10:32.054437719Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.057354 containerd[1461]: time="2025-02-13T15:10:32.057313872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:32.058367 containerd[1461]: time="2025-02-13T15:10:32.058331797Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.121988085s" Feb 13 15:10:32.058401 containerd[1461]: time="2025-02-13T15:10:32.058375831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:10:32.058904 containerd[1461]: time="2025-02-13T15:10:32.058870877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:10:32.182465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:10:32.191391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:32.288174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:32.291334 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:10:32.326026 kubelet[1887]: E0213 15:10:32.325910 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:10:32.329080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:10:32.329249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:10:32.329681 systemd[1]: kubelet.service: Consumed 121ms CPU time, 97M memory peak. Feb 13 15:10:33.335443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238776842.mount: Deactivated successfully. 
Feb 13 15:10:33.696396 containerd[1461]: time="2025-02-13T15:10:33.696274887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:33.696950 containerd[1461]: time="2025-02-13T15:10:33.696906666Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 15:10:33.697671 containerd[1461]: time="2025-02-13T15:10:33.697632354Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:33.699891 containerd[1461]: time="2025-02-13T15:10:33.699864031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:33.700681 containerd[1461]: time="2025-02-13T15:10:33.700522756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.641622885s" Feb 13 15:10:33.700681 containerd[1461]: time="2025-02-13T15:10:33.700555957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:10:33.701085 containerd[1461]: time="2025-02-13T15:10:33.701059624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:10:34.323705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976225176.mount: Deactivated successfully. 
Feb 13 15:10:34.942485 containerd[1461]: time="2025-02-13T15:10:34.942431427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:34.947604 containerd[1461]: time="2025-02-13T15:10:34.947554730Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:10:34.948654 containerd[1461]: time="2025-02-13T15:10:34.948622650Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:34.951877 containerd[1461]: time="2025-02-13T15:10:34.951842044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:34.953953 containerd[1461]: time="2025-02-13T15:10:34.953925884Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.252831136s" Feb 13 15:10:34.954986 containerd[1461]: time="2025-02-13T15:10:34.953957396Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:10:34.955367 containerd[1461]: time="2025-02-13T15:10:34.955341918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:10:35.426939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481124916.mount: Deactivated successfully. 
Feb 13 15:10:35.431066 containerd[1461]: time="2025-02-13T15:10:35.431021473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.431803 containerd[1461]: time="2025-02-13T15:10:35.431757289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 15:10:35.432619 containerd[1461]: time="2025-02-13T15:10:35.432557964Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.435168 containerd[1461]: time="2025-02-13T15:10:35.435117285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:35.436058 containerd[1461]: time="2025-02-13T15:10:35.435903569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 480.530581ms" Feb 13 15:10:35.436058 containerd[1461]: time="2025-02-13T15:10:35.435949788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:10:35.436462 containerd[1461]: time="2025-02-13T15:10:35.436420476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:10:35.969150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251069301.mount: Deactivated successfully. Feb 13 15:10:37.541245 containerd[1461]: time="2025-02-13T15:10:37.541072557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:37.541627 containerd[1461]: time="2025-02-13T15:10:37.541548373Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 15:10:37.542687 containerd[1461]: time="2025-02-13T15:10:37.542658343Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:37.545666 containerd[1461]: time="2025-02-13T15:10:37.545622163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:37.548134 containerd[1461]: time="2025-02-13T15:10:37.548083879Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.111627046s" Feb 13 15:10:37.548134 containerd[1461]: time="2025-02-13T15:10:37.548125357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:10:42.144423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
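The pulls recorded above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.6, coredns v1.11.1, pause 3.10, etcd 3.5.15-0) are the control-plane image set that kubeadm preloads before writing static pod manifests. A hypothetical kubeadm ClusterConfiguration corresponding to this set is sketched below; no kubeadm configuration appears anywhere in this log, so every value here is an assumption inferred from the image tags, not the configuration actually used on this host.

# Hypothetical kubeadm config matching the image tags pulled above (sketch only).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.31.6
imageRepository: registry.k8s.io
dns:
  imageTag: v1.11.1        # coredns/coredns tag pulled above
etcd:
  local:
    imageTag: 3.5.15-0     # etcd tag pulled above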
Feb 13 15:10:42.145067 systemd[1]: kubelet.service: Consumed 121ms CPU time, 97M memory peak. Feb 13 15:10:42.156417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:42.183736 systemd[1]: Reload requested from client PID 2034 ('systemctl') (unit session-5.scope)... Feb 13 15:10:42.183753 systemd[1]: Reloading... Feb 13 15:10:42.256221 zram_generator::config[2082]: No configuration found. Feb 13 15:10:42.378250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:42.450276 systemd[1]: Reloading finished in 266 ms. Feb 13 15:10:42.484423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:42.487426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:42.488631 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:10:42.488841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:42.488887 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.3M memory peak. Feb 13 15:10:42.492427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:42.606232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:42.610738 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:10:42.651437 kubelet[2125]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:10:42.651437 kubelet[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:10:42.651437 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
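Of the three deprecation warnings just logged, --container-runtime-endpoint and --volume-plugin-dir have config-file equivalents, while --pod-infra-container-image does not (per the warning itself, the sandbox image will instead be reported by the CRI runtime). A sketch of the corresponding KubeletConfiguration fields follows; the flag values actually passed on this host are not printed in the log, so the endpoint and directory shown are assumptions, chosen to match the containerd socket and the Flexvolume directory the kubelet recreates further below.

# Config-file counterparts for two of the deprecated flags warned about above (sketch only).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir
# --pod-infra-container-image has no KubeletConfiguration field; the warning notes the
# image garbage collector will take the sandbox image from the CRI runtime instead.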
Feb 13 15:10:42.651849 kubelet[2125]: I0213 15:10:42.651764 2125 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:10:43.330528 kubelet[2125]: I0213 15:10:43.330477 2125 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:10:43.330528 kubelet[2125]: I0213 15:10:43.330518 2125 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:10:43.330819 kubelet[2125]: I0213 15:10:43.330792 2125 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:10:43.365908 kubelet[2125]: E0213 15:10:43.365866 2125 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:43.368815 kubelet[2125]: I0213 15:10:43.368788 2125 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:10:43.382559 kubelet[2125]: E0213 15:10:43.382512 2125 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:10:43.382559 kubelet[2125]: I0213 15:10:43.382551 2125 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:10:43.388225 kubelet[2125]: I0213 15:10:43.388199 2125 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:10:43.389137 kubelet[2125]: I0213 15:10:43.389056 2125 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:10:43.389279 kubelet[2125]: I0213 15:10:43.389220 2125 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:10:43.389463 kubelet[2125]: I0213 15:10:43.389268 2125 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:10:43.389609 kubelet[2125]: I0213 15:10:43.389589 2125 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:10:43.389609 kubelet[2125]: I0213 15:10:43.389601 2125 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:10:43.389804 kubelet[2125]: I0213 15:10:43.389785 2125 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:43.391508 kubelet[2125]: I0213 15:10:43.391477 2125 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:10:43.391508 kubelet[2125]: I0213 15:10:43.391508 2125 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:10:43.391575 kubelet[2125]: I0213 15:10:43.391538 2125 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:10:43.391575 kubelet[2125]: I0213 15:10:43.391549 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:10:43.397932 kubelet[2125]: I0213 15:10:43.397832 2125 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:10:43.399069 kubelet[2125]: W0213 15:10:43.399028 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:43.399171 kubelet[2125]: E0213 15:10:43.399084 2125 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:43.399988 kubelet[2125]: I0213 15:10:43.399952 2125 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:10:43.400123 kubelet[2125]: W0213 15:10:43.399940 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:43.400123 kubelet[2125]: E0213 15:10:43.400080 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:43.400781 kubelet[2125]: W0213 15:10:43.400744 2125 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:10:43.401505 kubelet[2125]: I0213 15:10:43.401485 2125 server.go:1269] "Started kubelet" Feb 13 15:10:43.402279 kubelet[2125]: I0213 15:10:43.401674 2125 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:10:43.403160 kubelet[2125]: I0213 15:10:43.402748 2125 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:10:43.403160 kubelet[2125]: I0213 15:10:43.403031 2125 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:10:43.403160 kubelet[2125]: I0213 15:10:43.403119 2125 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:10:43.403831 kubelet[2125]: I0213 15:10:43.403801 2125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:10:43.404054 kubelet[2125]: I0213 15:10:43.404005 2125 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:10:43.405743 kubelet[2125]: I0213 15:10:43.404458 2125 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:10:43.405743 kubelet[2125]: I0213 15:10:43.404570 2125 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:10:43.405743 kubelet[2125]: I0213 15:10:43.404623 2125 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:10:43.405743 kubelet[2125]: W0213 15:10:43.404895 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:43.405743 kubelet[2125]: E0213 15:10:43.404933 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:43.405743 kubelet[2125]: E0213 15:10:43.405672 2125 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:10:43.405933 kubelet[2125]: E0213 15:10:43.405762 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" Feb 13 15:10:43.406683 kubelet[2125]: E0213 15:10:43.406661 2125 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:10:43.408660 kubelet[2125]: I0213 15:10:43.408549 2125 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:10:43.408660 kubelet[2125]: I0213 15:10:43.408567 2125 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:10:43.408660 kubelet[2125]: E0213 15:10:43.407680 2125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd25d12475ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:10:43.401455082 +0000 UTC m=+0.787754995,LastTimestamp:2025-02-13 15:10:43.401455082 +0000 UTC m=+0.787754995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:10:43.408660 kubelet[2125]: I0213 15:10:43.408654 2125 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:10:43.420259 kubelet[2125]: I0213 15:10:43.420220 2125 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:10:43.420259 kubelet[2125]: I0213 15:10:43.420241 2125 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:10:43.420259 kubelet[2125]: I0213 15:10:43.420259 2125 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:43.422387 kubelet[2125]: I0213 15:10:43.422322 2125 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:10:43.423651 kubelet[2125]: I0213 15:10:43.423391 2125 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:10:43.423651 kubelet[2125]: I0213 15:10:43.423422 2125 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:10:43.423651 kubelet[2125]: I0213 15:10:43.423441 2125 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:10:43.423651 kubelet[2125]: E0213 15:10:43.423482 2125 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:10:43.506814 kubelet[2125]: E0213 15:10:43.506741 2125 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:10:43.524063 kubelet[2125]: E0213 15:10:43.524033 2125 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:10:43.561142 kubelet[2125]: I0213 15:10:43.561093 2125 policy_none.go:49] "None policy: Start" Feb 13 15:10:43.561444 kubelet[2125]: W0213 15:10:43.561384 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:43.561486 kubelet[2125]: E0213 15:10:43.561450 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:43.562215 kubelet[2125]: I0213 15:10:43.562199 2125 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:10:43.562250 kubelet[2125]: I0213 15:10:43.562223 2125 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:10:43.568537 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:10:43.584017 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:10:43.587398 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:10:43.597307 kubelet[2125]: I0213 15:10:43.597038 2125 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:10:43.597307 kubelet[2125]: I0213 15:10:43.597288 2125 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:10:43.597431 kubelet[2125]: I0213 15:10:43.597305 2125 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:10:43.597865 kubelet[2125]: I0213 15:10:43.597584 2125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:10:43.599528 kubelet[2125]: E0213 15:10:43.599458 2125 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:10:43.607019 kubelet[2125]: E0213 15:10:43.606977 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" Feb 13 15:10:43.698964 kubelet[2125]: I0213 15:10:43.698934 2125 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:10:43.699361 kubelet[2125]: E0213 15:10:43.699330 2125 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Feb 13 15:10:43.732482 systemd[1]: Created slice kubepods-burstable-pod2a251915ff17ce9baaa0d10edcd5b646.slice - libcontainer container kubepods-burstable-pod2a251915ff17ce9baaa0d10edcd5b646.slice. Feb 13 15:10:43.750866 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 15:10:43.760337 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
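The three burstable slices created above back the static control-plane pods (kube-apiserver, kube-controller-manager and kube-scheduler for node "localhost") whose manifests the kubelet reads from /etc/kubernetes/manifests. The rough shape of such a manifest is sketched below for kube-apiserver; it is illustrative only, since the real manifests on this host are not reproduced in the log. The volume names mirror the host-path volumes attached in the entries that follow, while the mount paths are assumed kubeadm defaults.

# Rough shape of a static pod manifest under /etc/kubernetes/manifests/ (sketch only).
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.31.6   # tag pulled earlier in this log
    volumeMounts:
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki                # assumed mount point
      readOnly: true
    - name: ca-certs
      mountPath: /etc/ssl/certs                     # assumed mount point
      readOnly: true
  volumes:
  - name: k8s-certs                                 # matches the "k8s-certs" volume attached below
    hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
  - name: ca-certs                                  # matches the "ca-certs" volume attached below
    hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate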
Feb 13 15:10:43.806937 kubelet[2125]: I0213 15:10:43.806889 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:43.806937 kubelet[2125]: I0213 15:10:43.806936 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:10:43.807114 kubelet[2125]: I0213 15:10:43.806957 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:43.807114 kubelet[2125]: I0213 15:10:43.806972 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:43.807114 kubelet[2125]: I0213 15:10:43.806988 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:43.807114 kubelet[2125]: I0213 15:10:43.807002 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:43.807114 kubelet[2125]: I0213 15:10:43.807018 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:43.807249 kubelet[2125]: I0213 15:10:43.807034 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:43.807249 kubelet[2125]: I0213 15:10:43.807048 2125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:43.900583 kubelet[2125]: I0213 15:10:43.900431 2125 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:10:43.900829 kubelet[2125]: E0213 15:10:43.900782 2125 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Feb 13 15:10:44.007827 kubelet[2125]: E0213 15:10:44.007774 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" Feb 13 15:10:44.049931 containerd[1461]: time="2025-02-13T15:10:44.049877997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a251915ff17ce9baaa0d10edcd5b646,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:44.059803 containerd[1461]: time="2025-02-13T15:10:44.059756325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:44.063422 containerd[1461]: time="2025-02-13T15:10:44.063291324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:44.150267 kubelet[2125]: E0213 15:10:44.150104 2125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd25d12475ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:10:43.401455082 +0000 UTC m=+0.787754995,LastTimestamp:2025-02-13 15:10:43.401455082 +0000 UTC m=+0.787754995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:10:44.302862 kubelet[2125]: I0213 15:10:44.302744 2125 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:10:44.303149 kubelet[2125]: E0213 15:10:44.303070 2125 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Feb 13 15:10:44.493617 kubelet[2125]: W0213 15:10:44.493542 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:44.493617 kubelet[2125]: E0213 15:10:44.493616 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:44.494890 kubelet[2125]: W0213 15:10:44.494851 2125 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:44.494890 kubelet[2125]: E0213 15:10:44.494877 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:44.515766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067195277.mount: Deactivated successfully. Feb 13 15:10:44.522431 containerd[1461]: time="2025-02-13T15:10:44.522381761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:44.523255 containerd[1461]: time="2025-02-13T15:10:44.523213398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:10:44.524287 containerd[1461]: time="2025-02-13T15:10:44.524186165Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:44.527068 containerd[1461]: time="2025-02-13T15:10:44.526977112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:44.528134 containerd[1461]: time="2025-02-13T15:10:44.528045994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:10:44.529138 containerd[1461]: time="2025-02-13T15:10:44.529105825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:44.530222 containerd[1461]: time="2025-02-13T15:10:44.530175828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.216253ms" Feb 13 15:10:44.530623 containerd[1461]: time="2025-02-13T15:10:44.530582076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:10:44.531002 containerd[1461]: time="2025-02-13T15:10:44.530974987Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:10:44.533717 containerd[1461]: time="2025-02-13T15:10:44.533639783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.806486ms" Feb 13 
15:10:44.536068 containerd[1461]: time="2025-02-13T15:10:44.536006181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.653823ms" Feb 13 15:10:44.667069 containerd[1461]: time="2025-02-13T15:10:44.666766084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:44.667069 containerd[1461]: time="2025-02-13T15:10:44.666846060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:44.667069 containerd[1461]: time="2025-02-13T15:10:44.666868206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.667069 containerd[1461]: time="2025-02-13T15:10:44.666950865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.667674 containerd[1461]: time="2025-02-13T15:10:44.667232243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:44.667674 containerd[1461]: time="2025-02-13T15:10:44.667308815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:44.667674 containerd[1461]: time="2025-02-13T15:10:44.667324914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.668349 containerd[1461]: time="2025-02-13T15:10:44.668227917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.668491 containerd[1461]: time="2025-02-13T15:10:44.668250144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:44.668544 containerd[1461]: time="2025-02-13T15:10:44.668475174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:44.668588 containerd[1461]: time="2025-02-13T15:10:44.668500764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.670272 containerd[1461]: time="2025-02-13T15:10:44.669996558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:44.696398 systemd[1]: Started cri-containerd-b47ae3ee5c79c1d774f24a2e825775e5e72f06ae3fe292ed16cd095d510ae573.scope - libcontainer container b47ae3ee5c79c1d774f24a2e825775e5e72f06ae3fe292ed16cd095d510ae573. Feb 13 15:10:44.699821 systemd[1]: Started cri-containerd-8006c147f57b0ade8e0a73486a5f70c4dd8c5c595cda35969f8caef30125f08d.scope - libcontainer container 8006c147f57b0ade8e0a73486a5f70c4dd8c5c595cda35969f8caef30125f08d. 
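The "connection refused" entries above (node registration, the kube-node-lease GET, the event POST, and the informer list/watch calls against https://10.0.0.39:6443) are the usual control-plane bootstrap race: the kubelet starts retrying against the apiserver before the static kube-apiserver pod it is about to launch is listening. A minimal, self-contained sketch of the same reachability check, using only the endpoint taken from the log and not part of any component shown here, is:

package main

// probe.go: stand-alone illustration of the dial the kubelet errors above describe.
import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log entries above.
	conn, err := net.DialTimeout("tcp", "10.0.0.39:6443", 2*time.Second)
	if err != nil {
		// Same failure class as "dial tcp 10.0.0.39:6443: connect: connection refused".
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Once the kube-apiserver container started below binds port 6443, the same dial succeeds and the retries seen in the log stop (the node registers successfully at 15:10:46).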
Feb 13 15:10:44.701239 systemd[1]: Started cri-containerd-b257a5d62e9e1730cbfdd375ea913742cf331500625fa85fe4badebd99890347.scope - libcontainer container b257a5d62e9e1730cbfdd375ea913742cf331500625fa85fe4badebd99890347. Feb 13 15:10:44.729607 containerd[1461]: time="2025-02-13T15:10:44.728943054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a251915ff17ce9baaa0d10edcd5b646,Namespace:kube-system,Attempt:0,} returns sandbox id \"b47ae3ee5c79c1d774f24a2e825775e5e72f06ae3fe292ed16cd095d510ae573\"" Feb 13 15:10:44.733856 containerd[1461]: time="2025-02-13T15:10:44.733526992Z" level=info msg="CreateContainer within sandbox \"b47ae3ee5c79c1d774f24a2e825775e5e72f06ae3fe292ed16cd095d510ae573\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:10:44.738209 containerd[1461]: time="2025-02-13T15:10:44.738165635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8006c147f57b0ade8e0a73486a5f70c4dd8c5c595cda35969f8caef30125f08d\"" Feb 13 15:10:44.740812 containerd[1461]: time="2025-02-13T15:10:44.740785216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b257a5d62e9e1730cbfdd375ea913742cf331500625fa85fe4badebd99890347\"" Feb 13 15:10:44.741001 containerd[1461]: time="2025-02-13T15:10:44.740969317Z" level=info msg="CreateContainer within sandbox \"8006c147f57b0ade8e0a73486a5f70c4dd8c5c595cda35969f8caef30125f08d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:10:44.743113 containerd[1461]: time="2025-02-13T15:10:44.743082892Z" level=info msg="CreateContainer within sandbox \"b257a5d62e9e1730cbfdd375ea913742cf331500625fa85fe4badebd99890347\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:10:44.758012 containerd[1461]: time="2025-02-13T15:10:44.757940912Z" level=info msg="CreateContainer within sandbox \"b47ae3ee5c79c1d774f24a2e825775e5e72f06ae3fe292ed16cd095d510ae573\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a77c594a71bcea80e4918c1ad938430bc77a45536fdb23675199c5ad09386ce0\"" Feb 13 15:10:44.758729 containerd[1461]: time="2025-02-13T15:10:44.758704147Z" level=info msg="StartContainer for \"a77c594a71bcea80e4918c1ad938430bc77a45536fdb23675199c5ad09386ce0\"" Feb 13 15:10:44.760072 containerd[1461]: time="2025-02-13T15:10:44.760039869Z" level=info msg="CreateContainer within sandbox \"8006c147f57b0ade8e0a73486a5f70c4dd8c5c595cda35969f8caef30125f08d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"918c3838c1e79ef7182ed38d73fa6a2ea40a46c81b3674820a29e0da56247266\"" Feb 13 15:10:44.760543 containerd[1461]: time="2025-02-13T15:10:44.760515239Z" level=info msg="StartContainer for \"918c3838c1e79ef7182ed38d73fa6a2ea40a46c81b3674820a29e0da56247266\"" Feb 13 15:10:44.762775 containerd[1461]: time="2025-02-13T15:10:44.762723528Z" level=info msg="CreateContainer within sandbox \"b257a5d62e9e1730cbfdd375ea913742cf331500625fa85fe4badebd99890347\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"632789b1ce41236668f04013deba05b245f419a93dd09be204855a79483cbab9\"" Feb 13 15:10:44.763437 containerd[1461]: time="2025-02-13T15:10:44.763223928Z" level=info msg="StartContainer for 
\"632789b1ce41236668f04013deba05b245f419a93dd09be204855a79483cbab9\"" Feb 13 15:10:44.786368 systemd[1]: Started cri-containerd-918c3838c1e79ef7182ed38d73fa6a2ea40a46c81b3674820a29e0da56247266.scope - libcontainer container 918c3838c1e79ef7182ed38d73fa6a2ea40a46c81b3674820a29e0da56247266. Feb 13 15:10:44.789938 systemd[1]: Started cri-containerd-632789b1ce41236668f04013deba05b245f419a93dd09be204855a79483cbab9.scope - libcontainer container 632789b1ce41236668f04013deba05b245f419a93dd09be204855a79483cbab9. Feb 13 15:10:44.790914 systemd[1]: Started cri-containerd-a77c594a71bcea80e4918c1ad938430bc77a45536fdb23675199c5ad09386ce0.scope - libcontainer container a77c594a71bcea80e4918c1ad938430bc77a45536fdb23675199c5ad09386ce0. Feb 13 15:10:44.808356 kubelet[2125]: E0213 15:10:44.808307 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" Feb 13 15:10:44.825852 containerd[1461]: time="2025-02-13T15:10:44.825808026Z" level=info msg="StartContainer for \"918c3838c1e79ef7182ed38d73fa6a2ea40a46c81b3674820a29e0da56247266\" returns successfully" Feb 13 15:10:44.844471 containerd[1461]: time="2025-02-13T15:10:44.844427837Z" level=info msg="StartContainer for \"a77c594a71bcea80e4918c1ad938430bc77a45536fdb23675199c5ad09386ce0\" returns successfully" Feb 13 15:10:44.844692 containerd[1461]: time="2025-02-13T15:10:44.844427917Z" level=info msg="StartContainer for \"632789b1ce41236668f04013deba05b245f419a93dd09be204855a79483cbab9\" returns successfully" Feb 13 15:10:44.940443 kubelet[2125]: W0213 15:10:44.940269 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:44.940443 kubelet[2125]: E0213 15:10:44.940337 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:44.942161 kubelet[2125]: W0213 15:10:44.942088 2125 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:10:44.942161 kubelet[2125]: E0213 15:10:44.942132 2125 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:10:45.105155 kubelet[2125]: I0213 15:10:45.105086 2125 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:10:46.352999 kubelet[2125]: I0213 15:10:46.352948 2125 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:10:46.352999 kubelet[2125]: E0213 15:10:46.352984 2125 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node 
\"localhost\" not found" Feb 13 15:10:46.369284 kubelet[2125]: E0213 15:10:46.369161 2125 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:10:46.469359 kubelet[2125]: E0213 15:10:46.469320 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 15:10:46.469593 kubelet[2125]: E0213 15:10:46.469558 2125 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:10:47.400389 kubelet[2125]: I0213 15:10:47.400348 2125 apiserver.go:52] "Watching apiserver" Feb 13 15:10:47.405710 kubelet[2125]: I0213 15:10:47.405649 2125 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:10:48.599147 systemd[1]: Reload requested from client PID 2402 ('systemctl') (unit session-5.scope)... Feb 13 15:10:48.599162 systemd[1]: Reloading... Feb 13 15:10:48.679231 zram_generator::config[2446]: No configuration found. Feb 13 15:10:48.772152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:10:48.860586 systemd[1]: Reloading finished in 260 ms. Feb 13 15:10:48.886998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:48.899433 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:10:48.899692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:48.899757 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 118.7M memory peak. Feb 13 15:10:48.916582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:10:49.016726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:10:49.022268 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:10:49.068658 kubelet[2488]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:10:49.068658 kubelet[2488]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:10:49.068658 kubelet[2488]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:10:49.068998 kubelet[2488]: I0213 15:10:49.068701 2488 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:10:49.076014 kubelet[2488]: I0213 15:10:49.075974 2488 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:10:49.076014 kubelet[2488]: I0213 15:10:49.076004 2488 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:10:49.076541 kubelet[2488]: I0213 15:10:49.076499 2488 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:10:49.077853 kubelet[2488]: I0213 15:10:49.077831 2488 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:10:49.079898 kubelet[2488]: I0213 15:10:49.079789 2488 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:10:49.082667 kubelet[2488]: E0213 15:10:49.082635 2488 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:10:49.082733 kubelet[2488]: I0213 15:10:49.082669 2488 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:10:49.084739 kubelet[2488]: I0213 15:10:49.084723 2488 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:10:49.084833 kubelet[2488]: I0213 15:10:49.084822 2488 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:10:49.084942 kubelet[2488]: I0213 15:10:49.084922 2488 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:10:49.085082 kubelet[2488]: I0213 15:10:49.084944 2488 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:10:49.085146 kubelet[2488]: I0213 15:10:49.085092 2488 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:10:49.085146 kubelet[2488]: I0213 15:10:49.085101 2488 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:10:49.085146 kubelet[2488]: I0213 15:10:49.085128 2488 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:49.085258 kubelet[2488]: I0213 15:10:49.085248 2488 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:10:49.085282 kubelet[2488]: I0213 15:10:49.085262 2488 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:10:49.085303 kubelet[2488]: I0213 15:10:49.085282 2488 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:10:49.085303 kubelet[2488]: I0213 15:10:49.085292 2488 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:10:49.086496 kubelet[2488]: I0213 15:10:49.086471 2488 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:10:49.090243 kubelet[2488]: I0213 15:10:49.089809 2488 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:10:49.090320 kubelet[2488]: I0213 15:10:49.090272 2488 server.go:1269] "Started kubelet" Feb 13 15:10:49.090621 kubelet[2488]: I0213 15:10:49.090572 2488 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:10:49.092027 kubelet[2488]: I0213 15:10:49.092009 2488 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:10:49.093304 kubelet[2488]: I0213 15:10:49.093246 2488 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:10:49.093539 kubelet[2488]: I0213 15:10:49.093515 2488 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:10:49.093794 kubelet[2488]: I0213 15:10:49.093761 2488 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:10:49.094523 kubelet[2488]: I0213 15:10:49.094493 2488 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:10:49.094993 kubelet[2488]: I0213 15:10:49.094660 2488 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:10:49.095735 kubelet[2488]: I0213 15:10:49.095690 2488 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:10:49.095916 kubelet[2488]: E0213 15:10:49.095893 2488 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:10:49.095994 kubelet[2488]: I0213 15:10:49.095973 2488 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:10:49.098030 kubelet[2488]: I0213 15:10:49.098002 2488 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:10:49.098129 kubelet[2488]: I0213 15:10:49.098101 2488 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:10:49.100789 kubelet[2488]: I0213 15:10:49.100677 2488 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:10:49.100789 kubelet[2488]: E0213 15:10:49.100713 2488 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:10:49.110327 kubelet[2488]: I0213 15:10:49.110273 2488 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:10:49.111307 kubelet[2488]: I0213 15:10:49.111242 2488 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:10:49.111307 kubelet[2488]: I0213 15:10:49.111264 2488 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:10:49.111307 kubelet[2488]: I0213 15:10:49.111281 2488 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:10:49.111398 kubelet[2488]: E0213 15:10:49.111319 2488 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:10:49.146040 kubelet[2488]: I0213 15:10:49.146012 2488 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:10:49.146040 kubelet[2488]: I0213 15:10:49.146031 2488 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:10:49.146174 kubelet[2488]: I0213 15:10:49.146053 2488 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:10:49.146267 kubelet[2488]: I0213 15:10:49.146208 2488 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:10:49.146267 kubelet[2488]: I0213 15:10:49.146220 2488 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:10:49.146267 kubelet[2488]: I0213 15:10:49.146247 2488 policy_none.go:49] "None policy: Start" Feb 13 15:10:49.146807 kubelet[2488]: I0213 15:10:49.146791 2488 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:10:49.146807 kubelet[2488]: I0213 15:10:49.146817 2488 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:10:49.147041 kubelet[2488]: I0213 15:10:49.147027 2488 state_mem.go:75] "Updated machine memory state" Feb 13 15:10:49.152404 kubelet[2488]: I0213 15:10:49.152381 2488 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:10:49.152621 kubelet[2488]: I0213 15:10:49.152573 2488 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:10:49.152621 kubelet[2488]: I0213 15:10:49.152584 2488 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:10:49.152829 kubelet[2488]: I0213 15:10:49.152804 2488 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:10:49.218326 kubelet[2488]: E0213 15:10:49.218278 2488 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:10:49.255226 kubelet[2488]: I0213 15:10:49.254993 2488 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:10:49.261911 kubelet[2488]: I0213 15:10:49.261813 2488 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 15:10:49.261911 kubelet[2488]: I0213 15:10:49.261894 2488 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:10:49.297247 kubelet[2488]: I0213 15:10:49.297056 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:49.297247 kubelet[2488]: I0213 15:10:49.297097 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:49.297247 kubelet[2488]: I0213 15:10:49.297115 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:49.297247 kubelet[2488]: I0213 15:10:49.297133 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:49.297247 kubelet[2488]: I0213 15:10:49.297148 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:49.297468 kubelet[2488]: I0213 15:10:49.297163 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:10:49.297468 kubelet[2488]: I0213 15:10:49.297178 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:49.297468 kubelet[2488]: I0213 15:10:49.297212 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:10:49.297468 kubelet[2488]: I0213 15:10:49.297232 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a251915ff17ce9baaa0d10edcd5b646-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a251915ff17ce9baaa0d10edcd5b646\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:50.085937 kubelet[2488]: I0213 15:10:50.085878 2488 apiserver.go:52] "Watching apiserver" Feb 13 15:10:50.095117 kubelet[2488]: I0213 15:10:50.095080 2488 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:10:50.137218 kubelet[2488]: E0213 15:10:50.137168 2488 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:10:50.150292 kubelet[2488]: I0213 15:10:50.149315 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.149301197 
podStartE2EDuration="1.149301197s" podCreationTimestamp="2025-02-13 15:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:50.149044508 +0000 UTC m=+1.122522887" watchObservedRunningTime="2025-02-13 15:10:50.149301197 +0000 UTC m=+1.122779576" Feb 13 15:10:50.168448 kubelet[2488]: I0213 15:10:50.168356 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.168335702 podStartE2EDuration="1.168335702s" podCreationTimestamp="2025-02-13 15:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:50.162051463 +0000 UTC m=+1.135529842" watchObservedRunningTime="2025-02-13 15:10:50.168335702 +0000 UTC m=+1.141814081" Feb 13 15:10:50.178973 kubelet[2488]: I0213 15:10:50.178882 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.178863838 podStartE2EDuration="3.178863838s" podCreationTimestamp="2025-02-13 15:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:50.168543832 +0000 UTC m=+1.142022211" watchObservedRunningTime="2025-02-13 15:10:50.178863838 +0000 UTC m=+1.152342217" Feb 13 15:10:50.512982 sudo[1599]: pam_unix(sudo:session): session closed for user root Feb 13 15:10:50.514251 sshd[1598]: Connection closed by 10.0.0.1 port 54750 Feb 13 15:10:50.514640 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:50.518095 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:54750.service: Deactivated successfully. Feb 13 15:10:50.520177 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:10:50.520442 systemd[1]: session-5.scope: Consumed 5.804s CPU time, 223.7M memory peak. Feb 13 15:10:50.521449 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:10:50.522605 systemd-logind[1444]: Removed session 5. Feb 13 15:10:53.872060 kubelet[2488]: I0213 15:10:53.872012 2488 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:10:53.872484 containerd[1461]: time="2025-02-13T15:10:53.872356165Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:10:53.872733 kubelet[2488]: I0213 15:10:53.872705 2488 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:10:54.793419 systemd[1]: Created slice kubepods-besteffort-podff0bbb4a_ee3e_4ba0_b5e1_55a9a1fdb61a.slice - libcontainer container kubepods-besteffort-podff0bbb4a_ee3e_4ba0_b5e1_55a9a1fdb61a.slice. Feb 13 15:10:54.816702 systemd[1]: Created slice kubepods-burstable-pod94188458_f3ff_46e4_a3a5_efa059a9928b.slice - libcontainer container kubepods-burstable-pod94188458_f3ff_46e4_a3a5_efa059a9928b.slice. 
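The kubelet_network entry above pushes the node's pod CIDR, 192.168.0.0/24, down to the runtime, and the accompanying containerd message notes that no CNI config exists yet and that it will wait for another component (the flannel DaemonSet pod created below) to drop one. A tiny illustrative sketch of what that CIDR value means for pod addressing follows; the sample address in it is hypothetical:

package main

// podcidr.go: illustration only; the logged PodCIDR is the per-node range from
// which the CNI IPAM plugin will later allocate pod addresses.
import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("192.168.0.0/24") // value from the kubelet_network entry above
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.0.17") // hypothetical pod address, used only for the containment check
	fmt.Printf("%v contains %v: %v\n", cidr, ip, cidr.Contains(ip))
}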
Feb 13 15:10:54.831204 kubelet[2488]: I0213 15:10:54.831136 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a-xtables-lock\") pod \"kube-proxy-v54p6\" (UID: \"ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a\") " pod="kube-system/kube-proxy-v54p6" Feb 13 15:10:54.831204 kubelet[2488]: I0213 15:10:54.831184 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/94188458-f3ff-46e4-a3a5-efa059a9928b-cni\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:54.831365 kubelet[2488]: I0213 15:10:54.831217 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr658\" (UniqueName: \"kubernetes.io/projected/94188458-f3ff-46e4-a3a5-efa059a9928b-kube-api-access-mr658\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:54.831365 kubelet[2488]: I0213 15:10:54.831236 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94188458-f3ff-46e4-a3a5-efa059a9928b-xtables-lock\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:54.831365 kubelet[2488]: I0213 15:10:54.831255 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a-kube-proxy\") pod \"kube-proxy-v54p6\" (UID: \"ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a\") " pod="kube-system/kube-proxy-v54p6" Feb 13 15:10:54.831365 kubelet[2488]: I0213 15:10:54.831272 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a-lib-modules\") pod \"kube-proxy-v54p6\" (UID: \"ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a\") " pod="kube-system/kube-proxy-v54p6" Feb 13 15:10:54.831365 kubelet[2488]: I0213 15:10:54.831286 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbk2\" (UniqueName: \"kubernetes.io/projected/ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a-kube-api-access-blbk2\") pod \"kube-proxy-v54p6\" (UID: \"ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a\") " pod="kube-system/kube-proxy-v54p6" Feb 13 15:10:54.831479 kubelet[2488]: I0213 15:10:54.831300 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/94188458-f3ff-46e4-a3a5-efa059a9928b-run\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:54.831479 kubelet[2488]: I0213 15:10:54.831313 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/94188458-f3ff-46e4-a3a5-efa059a9928b-cni-plugin\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:54.831479 kubelet[2488]: I0213 15:10:54.831328 2488 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/94188458-f3ff-46e4-a3a5-efa059a9928b-flannel-cfg\") pod \"kube-flannel-ds-67k4b\" (UID: \"94188458-f3ff-46e4-a3a5-efa059a9928b\") " pod="kube-flannel/kube-flannel-ds-67k4b" Feb 13 15:10:55.111026 containerd[1461]: time="2025-02-13T15:10:55.110670567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v54p6,Uid:ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:55.126620 containerd[1461]: time="2025-02-13T15:10:55.126361585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-67k4b,Uid:94188458-f3ff-46e4-a3a5-efa059a9928b,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:10:55.132297 containerd[1461]: time="2025-02-13T15:10:55.131796512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:55.132297 containerd[1461]: time="2025-02-13T15:10:55.132276315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:55.132297 containerd[1461]: time="2025-02-13T15:10:55.132288723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:55.132583 containerd[1461]: time="2025-02-13T15:10:55.132391383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:55.150225 containerd[1461]: time="2025-02-13T15:10:55.150136173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:10:55.150389 containerd[1461]: time="2025-02-13T15:10:55.150359865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:10:55.150469 containerd[1461]: time="2025-02-13T15:10:55.150449358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:55.150646 containerd[1461]: time="2025-02-13T15:10:55.150616817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:10:55.151395 systemd[1]: Started cri-containerd-d796a7df2c7233a167fa13b92d089df04cd28d9eead885827f3b9c3fc290f9c9.scope - libcontainer container d796a7df2c7233a167fa13b92d089df04cd28d9eead885827f3b9c3fc290f9c9. Feb 13 15:10:55.170369 systemd[1]: Started cri-containerd-b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e.scope - libcontainer container b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e. 
Feb 13 15:10:55.174846 containerd[1461]: time="2025-02-13T15:10:55.174814455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v54p6,Uid:ff0bbb4a-ee3e-4ba0-b5e1-55a9a1fdb61a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d796a7df2c7233a167fa13b92d089df04cd28d9eead885827f3b9c3fc290f9c9\"" Feb 13 15:10:55.178688 containerd[1461]: time="2025-02-13T15:10:55.178562826Z" level=info msg="CreateContainer within sandbox \"d796a7df2c7233a167fa13b92d089df04cd28d9eead885827f3b9c3fc290f9c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:10:55.194751 containerd[1461]: time="2025-02-13T15:10:55.194706512Z" level=info msg="CreateContainer within sandbox \"d796a7df2c7233a167fa13b92d089df04cd28d9eead885827f3b9c3fc290f9c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"06a15e30f19f7d7efd80ad1b7d15921d3873b6003eb05393117b336d8ccdca3c\"" Feb 13 15:10:55.196252 containerd[1461]: time="2025-02-13T15:10:55.195365301Z" level=info msg="StartContainer for \"06a15e30f19f7d7efd80ad1b7d15921d3873b6003eb05393117b336d8ccdca3c\"" Feb 13 15:10:55.205579 containerd[1461]: time="2025-02-13T15:10:55.205547228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-67k4b,Uid:94188458-f3ff-46e4-a3a5-efa059a9928b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\"" Feb 13 15:10:55.207817 containerd[1461]: time="2025-02-13T15:10:55.207712946Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:10:55.226385 systemd[1]: Started cri-containerd-06a15e30f19f7d7efd80ad1b7d15921d3873b6003eb05393117b336d8ccdca3c.scope - libcontainer container 06a15e30f19f7d7efd80ad1b7d15921d3873b6003eb05393117b336d8ccdca3c. Feb 13 15:10:55.252296 containerd[1461]: time="2025-02-13T15:10:55.250884699Z" level=info msg="StartContainer for \"06a15e30f19f7d7efd80ad1b7d15921d3873b6003eb05393117b336d8ccdca3c\" returns successfully" Feb 13 15:10:56.154808 kubelet[2488]: I0213 15:10:56.154688 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v54p6" podStartSLOduration=2.154672037 podStartE2EDuration="2.154672037s" podCreationTimestamp="2025-02-13 15:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:10:56.154555052 +0000 UTC m=+7.128033431" watchObservedRunningTime="2025-02-13 15:10:56.154672037 +0000 UTC m=+7.128150416" Feb 13 15:10:56.515359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015948486.mount: Deactivated successfully. 
Feb 13 15:10:56.547267 containerd[1461]: time="2025-02-13T15:10:56.546712948Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:56.547607 containerd[1461]: time="2025-02-13T15:10:56.547344538Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:10:56.548468 containerd[1461]: time="2025-02-13T15:10:56.548431259Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:56.550792 containerd[1461]: time="2025-02-13T15:10:56.550749141Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:56.551798 containerd[1461]: time="2025-02-13T15:10:56.551763742Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.344015656s" Feb 13 15:10:56.551798 containerd[1461]: time="2025-02-13T15:10:56.551796361Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:10:56.554124 containerd[1461]: time="2025-02-13T15:10:56.553952473Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:10:56.566080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923978871.mount: Deactivated successfully. Feb 13 15:10:56.568382 containerd[1461]: time="2025-02-13T15:10:56.568264871Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1\"" Feb 13 15:10:56.568989 containerd[1461]: time="2025-02-13T15:10:56.568963057Z" level=info msg="StartContainer for \"ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1\"" Feb 13 15:10:56.596390 systemd[1]: Started cri-containerd-ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1.scope - libcontainer container ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1. Feb 13 15:10:56.618234 containerd[1461]: time="2025-02-13T15:10:56.617124139Z" level=info msg="StartContainer for \"ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1\" returns successfully" Feb 13 15:10:56.631027 systemd[1]: cri-containerd-ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1.scope: Deactivated successfully. 
Feb 13 15:10:56.667273 containerd[1461]: time="2025-02-13T15:10:56.667177268Z" level=info msg="shim disconnected" id=ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1 namespace=k8s.io Feb 13 15:10:56.667273 containerd[1461]: time="2025-02-13T15:10:56.667264436Z" level=warning msg="cleaning up after shim disconnected" id=ceb5d2b3e730506c610c7d009c17dbdd0ec0347e1f89e01f7abe470a342438d1 namespace=k8s.io Feb 13 15:10:56.667468 containerd[1461]: time="2025-02-13T15:10:56.667288969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:10:57.148551 containerd[1461]: time="2025-02-13T15:10:57.148490412Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:10:58.492992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040752679.mount: Deactivated successfully. Feb 13 15:10:59.378955 containerd[1461]: time="2025-02-13T15:10:59.378905315Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:59.380143 containerd[1461]: time="2025-02-13T15:10:59.380048156Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" Feb 13 15:10:59.380840 containerd[1461]: time="2025-02-13T15:10:59.380812664Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:59.384629 containerd[1461]: time="2025-02-13T15:10:59.384561853Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:10:59.386906 containerd[1461]: time="2025-02-13T15:10:59.386794751Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.238245909s" Feb 13 15:10:59.386906 containerd[1461]: time="2025-02-13T15:10:59.386828887Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:10:59.389666 containerd[1461]: time="2025-02-13T15:10:59.389606273Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:10:59.399715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432368622.mount: Deactivated successfully. 
Feb 13 15:10:59.401188 containerd[1461]: time="2025-02-13T15:10:59.401146774Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49\"" Feb 13 15:10:59.401963 containerd[1461]: time="2025-02-13T15:10:59.401935533Z" level=info msg="StartContainer for \"41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49\"" Feb 13 15:10:59.436411 systemd[1]: Started cri-containerd-41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49.scope - libcontainer container 41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49. Feb 13 15:10:59.464626 systemd[1]: cri-containerd-41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49.scope: Deactivated successfully. Feb 13 15:10:59.519262 containerd[1461]: time="2025-02-13T15:10:59.519172696Z" level=info msg="StartContainer for \"41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49\" returns successfully" Feb 13 15:10:59.550368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49-rootfs.mount: Deactivated successfully. Feb 13 15:10:59.555670 containerd[1461]: time="2025-02-13T15:10:59.555491412Z" level=info msg="shim disconnected" id=41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49 namespace=k8s.io Feb 13 15:10:59.555670 containerd[1461]: time="2025-02-13T15:10:59.555554601Z" level=warning msg="cleaning up after shim disconnected" id=41bdcb7c45706653d2a12342169c29fe7fb5dafdc8104fad06742e04954d2b49 namespace=k8s.io Feb 13 15:10:59.555670 containerd[1461]: time="2025-02-13T15:10:59.555563005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:10:59.565302 kubelet[2488]: I0213 15:10:59.564880 2488 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:10:59.593724 systemd[1]: Created slice kubepods-burstable-podd49f6e53_222a_4d04_9687_4b04a69e0218.slice - libcontainer container kubepods-burstable-podd49f6e53_222a_4d04_9687_4b04a69e0218.slice. Feb 13 15:10:59.599961 systemd[1]: Created slice kubepods-burstable-pod6ef2c084_272c_4d07_9e4f_71ab641d80e5.slice - libcontainer container kubepods-burstable-pod6ef2c084_272c_4d07_9e4f_71ab641d80e5.slice. 
Feb 13 15:10:59.666152 kubelet[2488]: I0213 15:10:59.665971 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49f6e53-222a-4d04-9687-4b04a69e0218-config-volume\") pod \"coredns-6f6b679f8f-lq8hg\" (UID: \"d49f6e53-222a-4d04-9687-4b04a69e0218\") " pod="kube-system/coredns-6f6b679f8f-lq8hg" Feb 13 15:10:59.666152 kubelet[2488]: I0213 15:10:59.666014 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxjg\" (UniqueName: \"kubernetes.io/projected/d49f6e53-222a-4d04-9687-4b04a69e0218-kube-api-access-gzxjg\") pod \"coredns-6f6b679f8f-lq8hg\" (UID: \"d49f6e53-222a-4d04-9687-4b04a69e0218\") " pod="kube-system/coredns-6f6b679f8f-lq8hg" Feb 13 15:10:59.666152 kubelet[2488]: I0213 15:10:59.666058 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ef2c084-272c-4d07-9e4f-71ab641d80e5-config-volume\") pod \"coredns-6f6b679f8f-lqkxv\" (UID: \"6ef2c084-272c-4d07-9e4f-71ab641d80e5\") " pod="kube-system/coredns-6f6b679f8f-lqkxv" Feb 13 15:10:59.666328 kubelet[2488]: I0213 15:10:59.666148 2488 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh4b8\" (UniqueName: \"kubernetes.io/projected/6ef2c084-272c-4d07-9e4f-71ab641d80e5-kube-api-access-hh4b8\") pod \"coredns-6f6b679f8f-lqkxv\" (UID: \"6ef2c084-272c-4d07-9e4f-71ab641d80e5\") " pod="kube-system/coredns-6f6b679f8f-lqkxv" Feb 13 15:10:59.898045 containerd[1461]: time="2025-02-13T15:10:59.898002708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq8hg,Uid:d49f6e53-222a-4d04-9687-4b04a69e0218,Namespace:kube-system,Attempt:0,}" Feb 13 15:10:59.905770 containerd[1461]: time="2025-02-13T15:10:59.905708861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqkxv,Uid:6ef2c084-272c-4d07-9e4f-71ab641d80e5,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:00.066297 containerd[1461]: time="2025-02-13T15:11:00.066185365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqkxv,Uid:6ef2c084-272c-4d07-9e4f-71ab641d80e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c004055f3882e9147b338b97d134923593ccbb95c0cb11c04fd6956da2b2376\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:11:00.066505 kubelet[2488]: E0213 15:11:00.066471 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c004055f3882e9147b338b97d134923593ccbb95c0cb11c04fd6956da2b2376\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:11:00.066551 kubelet[2488]: E0213 15:11:00.066536 2488 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c004055f3882e9147b338b97d134923593ccbb95c0cb11c04fd6956da2b2376\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-lqkxv" Feb 13 15:11:00.067220 containerd[1461]: time="2025-02-13T15:11:00.067164823Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-lq8hg,Uid:d49f6e53-222a-4d04-9687-4b04a69e0218,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:11:00.067385 kubelet[2488]: E0213 15:11:00.067347 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:11:00.067422 kubelet[2488]: E0213 15:11:00.067399 2488 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-lq8hg" Feb 13 15:11:00.070164 kubelet[2488]: E0213 15:11:00.070126 2488 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-lq8hg" Feb 13 15:11:00.070248 kubelet[2488]: E0213 15:11:00.070217 2488 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c004055f3882e9147b338b97d134923593ccbb95c0cb11c04fd6956da2b2376\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-lqkxv" Feb 13 15:11:00.070322 kubelet[2488]: E0213 15:11:00.070287 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lqkxv_kube-system(6ef2c084-272c-4d07-9e4f-71ab641d80e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lqkxv_kube-system(6ef2c084-272c-4d07-9e4f-71ab641d80e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c004055f3882e9147b338b97d134923593ccbb95c0cb11c04fd6956da2b2376\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-lqkxv" podUID="6ef2c084-272c-4d07-9e4f-71ab641d80e5" Feb 13 15:11:00.070322 kubelet[2488]: E0213 15:11:00.070216 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lq8hg_kube-system(d49f6e53-222a-4d04-9687-4b04a69e0218)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lq8hg_kube-system(d49f6e53-222a-4d04-9687-4b04a69e0218)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-lq8hg" 
podUID="d49f6e53-222a-4d04-9687-4b04a69e0218" Feb 13 15:11:00.156644 containerd[1461]: time="2025-02-13T15:11:00.156595884Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:11:00.165385 containerd[1461]: time="2025-02-13T15:11:00.165329737Z" level=info msg="CreateContainer within sandbox \"b95d34b1f143f4289229d4acac822b3e4b8f503ca399b2716f0d0d34d28b244e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a1b1d767b3a60ed5073bcad241d29cdcd9794c02c4e9152d6529d7db4e464174\"" Feb 13 15:11:00.166708 containerd[1461]: time="2025-02-13T15:11:00.165973012Z" level=info msg="StartContainer for \"a1b1d767b3a60ed5073bcad241d29cdcd9794c02c4e9152d6529d7db4e464174\"" Feb 13 15:11:00.191375 systemd[1]: Started cri-containerd-a1b1d767b3a60ed5073bcad241d29cdcd9794c02c4e9152d6529d7db4e464174.scope - libcontainer container a1b1d767b3a60ed5073bcad241d29cdcd9794c02c4e9152d6529d7db4e464174. Feb 13 15:11:00.216232 containerd[1461]: time="2025-02-13T15:11:00.216098674Z" level=info msg="StartContainer for \"a1b1d767b3a60ed5073bcad241d29cdcd9794c02c4e9152d6529d7db4e464174\" returns successfully" Feb 13 15:11:00.425645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab5057f0e47a54ccf2b91b040f46bc534c50ce4fb389b8f3fe037586541ab252-shm.mount: Deactivated successfully. Feb 13 15:11:01.336894 systemd-networkd[1399]: flannel.1: Link UP Feb 13 15:11:01.336914 systemd-networkd[1399]: flannel.1: Gained carrier Feb 13 15:11:02.121245 kubelet[2488]: I0213 15:11:02.121153 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-67k4b" podStartSLOduration=3.94022951 podStartE2EDuration="8.121136873s" podCreationTimestamp="2025-02-13 15:10:54 +0000 UTC" firstStartedPulling="2025-02-13 15:10:55.206773672 +0000 UTC m=+6.180252011" lastFinishedPulling="2025-02-13 15:10:59.387680995 +0000 UTC m=+10.361159374" observedRunningTime="2025-02-13 15:11:01.179464435 +0000 UTC m=+12.152942854" watchObservedRunningTime="2025-02-13 15:11:02.121136873 +0000 UTC m=+13.094615252" Feb 13 15:11:02.893370 systemd-networkd[1399]: flannel.1: Gained IPv6LL Feb 13 15:11:05.092270 update_engine[1446]: I20250213 15:11:05.091845 1446 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:11:05.120225 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3133) Feb 13 15:11:05.184296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3136) Feb 13 15:11:13.113219 containerd[1461]: time="2025-02-13T15:11:13.113099575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq8hg,Uid:d49f6e53-222a-4d04-9687-4b04a69e0218,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:13.141721 systemd-networkd[1399]: cni0: Link UP Feb 13 15:11:13.141727 systemd-networkd[1399]: cni0: Gained carrier Feb 13 15:11:13.145145 systemd-networkd[1399]: cni0: Lost carrier Feb 13 15:11:13.148307 systemd-networkd[1399]: veth4d0de66b: Link UP Feb 13 15:11:13.150587 kernel: cni0: port 1(veth4d0de66b) entered blocking state Feb 13 15:11:13.150719 kernel: cni0: port 1(veth4d0de66b) entered disabled state Feb 13 15:11:13.150742 kernel: veth4d0de66b: entered allmulticast mode Feb 13 15:11:13.150757 kernel: veth4d0de66b: entered promiscuous mode Feb 13 15:11:13.152661 kernel: cni0: port 1(veth4d0de66b) entered blocking state Feb 13 15:11:13.152708 kernel: cni0: port 1(veth4d0de66b) entered forwarding state Feb 13 15:11:13.153451 kernel: cni0: port 1(veth4d0de66b) entered disabled state Feb 13 15:11:13.164888 kernel: cni0: port 1(veth4d0de66b) entered blocking state Feb 13 15:11:13.164971 kernel: cni0: port 1(veth4d0de66b) entered forwarding state Feb 13 15:11:13.164914 systemd-networkd[1399]: veth4d0de66b: Gained carrier Feb 13 15:11:13.165578 systemd-networkd[1399]: cni0: Gained carrier Feb 13 15:11:13.166967 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"} Feb 13 15:11:13.166967 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:11:13.183711 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:11:13.183602959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:13.183894 containerd[1461]: time="2025-02-13T15:11:13.183748746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:13.183894 containerd[1461]: time="2025-02-13T15:11:13.183767710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:13.184527 containerd[1461]: time="2025-02-13T15:11:13.184434473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:13.195882 systemd[1]: run-containerd-runc-k8s.io-92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728-runc.0IZCAt.mount: Deactivated successfully. 
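The Go map dump and the JSON logged above show the flannel CNI plugin delegating to the bridge plugin: it fills in the host-local IPAM range from the node subnet 192.168.0.0/24, adds a route toward the cluster network 192.168.0.0/17, and sets mtu 1450, then hands the merged configuration to the bridge plugin under the network name cbr0. A sketch of the kind of /etc/cni/net.d conflist that typically produces this delegation is shown below; the path and exact contents are assumptions, since the file itself does not appear in the log, and only the name, hairpinMode, and isDefaultGateway fields can be checked against the delegated configuration above.

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    }
  ]
}

Under that layout, flannel supplies the IPAM ranges, routes, and MTU from subnet.env at sandbox-creation time and passes the rest of the delegate section through to the bridge plugin unchanged.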
Feb 13 15:11:13.208399 systemd[1]: Started cri-containerd-92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728.scope - libcontainer container 92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728. Feb 13 15:11:13.218174 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:13.236992 containerd[1461]: time="2025-02-13T15:11:13.236958576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq8hg,Uid:d49f6e53-222a-4d04-9687-4b04a69e0218,Namespace:kube-system,Attempt:0,} returns sandbox id \"92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728\"" Feb 13 15:11:13.240343 containerd[1461]: time="2025-02-13T15:11:13.240313916Z" level=info msg="CreateContainer within sandbox \"92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:11:13.274451 containerd[1461]: time="2025-02-13T15:11:13.274336681Z" level=info msg="CreateContainer within sandbox \"92cf1559d18ed1bb2e8186762218b51d98165e924011cdb2d16d75b688940728\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b761dd7da4a40e27282b07dd3380a1297f5de05baeb69761773f0a7398d0de1a\"" Feb 13 15:11:13.275090 containerd[1461]: time="2025-02-13T15:11:13.275043891Z" level=info msg="StartContainer for \"b761dd7da4a40e27282b07dd3380a1297f5de05baeb69761773f0a7398d0de1a\"" Feb 13 15:11:13.313376 systemd[1]: Started cri-containerd-b761dd7da4a40e27282b07dd3380a1297f5de05baeb69761773f0a7398d0de1a.scope - libcontainer container b761dd7da4a40e27282b07dd3380a1297f5de05baeb69761773f0a7398d0de1a. Feb 13 15:11:13.337076 containerd[1461]: time="2025-02-13T15:11:13.337036143Z" level=info msg="StartContainer for \"b761dd7da4a40e27282b07dd3380a1297f5de05baeb69761773f0a7398d0de1a\" returns successfully" Feb 13 15:11:14.020503 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:56736.service - OpenSSH per-connection server daemon (10.0.0.1:56736). Feb 13 15:11:14.090285 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 56736 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:14.091963 sshd-session[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:14.096301 systemd-logind[1444]: New session 6 of user core. Feb 13 15:11:14.115610 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:11:14.195255 kubelet[2488]: I0213 15:11:14.195150 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lq8hg" podStartSLOduration=20.1951159 podStartE2EDuration="20.1951159s" podCreationTimestamp="2025-02-13 15:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:14.195042647 +0000 UTC m=+25.168521026" watchObservedRunningTime="2025-02-13 15:11:14.1951159 +0000 UTC m=+25.168594279" Feb 13 15:11:14.273837 sshd[3304]: Connection closed by 10.0.0.1 port 56736 Feb 13 15:11:14.274434 sshd-session[3302]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:14.277832 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:56736.service: Deactivated successfully. Feb 13 15:11:14.279752 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:11:14.280457 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:11:14.281352 systemd-logind[1444]: Removed session 6. 
Feb 13 15:11:14.285377 systemd-networkd[1399]: veth4d0de66b: Gained IPv6LL Feb 13 15:11:15.053407 systemd-networkd[1399]: cni0: Gained IPv6LL Feb 13 15:11:15.113109 containerd[1461]: time="2025-02-13T15:11:15.112963286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqkxv,Uid:6ef2c084-272c-4d07-9e4f-71ab641d80e5,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:15.131437 systemd-networkd[1399]: vethc7ac051b: Link UP Feb 13 15:11:15.132830 kernel: cni0: port 2(vethc7ac051b) entered blocking state Feb 13 15:11:15.132883 kernel: cni0: port 2(vethc7ac051b) entered disabled state Feb 13 15:11:15.132895 kernel: vethc7ac051b: entered allmulticast mode Feb 13 15:11:15.134211 kernel: vethc7ac051b: entered promiscuous mode Feb 13 15:11:15.144281 kernel: cni0: port 2(vethc7ac051b) entered blocking state Feb 13 15:11:15.144329 kernel: cni0: port 2(vethc7ac051b) entered forwarding state Feb 13 15:11:15.144307 systemd-networkd[1399]: vethc7ac051b: Gained carrier Feb 13 15:11:15.145761 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Feb 13 15:11:15.145761 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:11:15.160205 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:11:15.160089018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:15.160205 containerd[1461]: time="2025-02-13T15:11:15.160159870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:15.160205 containerd[1461]: time="2025-02-13T15:11:15.160179913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:15.160369 containerd[1461]: time="2025-02-13T15:11:15.160275808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:15.183365 systemd[1]: Started cri-containerd-7cf3984ace1ad4a432b98ea150317c1e9c7904398b9c8caf8bfd3f1ded26aa01.scope - libcontainer container 7cf3984ace1ad4a432b98ea150317c1e9c7904398b9c8caf8bfd3f1ded26aa01. 
Feb 13 15:11:15.194099 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:11:15.210787 containerd[1461]: time="2025-02-13T15:11:15.210748203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lqkxv,Uid:6ef2c084-272c-4d07-9e4f-71ab641d80e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf3984ace1ad4a432b98ea150317c1e9c7904398b9c8caf8bfd3f1ded26aa01\"" Feb 13 15:11:15.213520 containerd[1461]: time="2025-02-13T15:11:15.213414396Z" level=info msg="CreateContainer within sandbox \"7cf3984ace1ad4a432b98ea150317c1e9c7904398b9c8caf8bfd3f1ded26aa01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:11:15.228508 containerd[1461]: time="2025-02-13T15:11:15.228458319Z" level=info msg="CreateContainer within sandbox \"7cf3984ace1ad4a432b98ea150317c1e9c7904398b9c8caf8bfd3f1ded26aa01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc2918386ba25462c5c5a1354b08a332d48eed8f4ad99575031809aae175c4ae\"" Feb 13 15:11:15.229153 containerd[1461]: time="2025-02-13T15:11:15.229120146Z" level=info msg="StartContainer for \"bc2918386ba25462c5c5a1354b08a332d48eed8f4ad99575031809aae175c4ae\"" Feb 13 15:11:15.251350 systemd[1]: Started cri-containerd-bc2918386ba25462c5c5a1354b08a332d48eed8f4ad99575031809aae175c4ae.scope - libcontainer container bc2918386ba25462c5c5a1354b08a332d48eed8f4ad99575031809aae175c4ae. Feb 13 15:11:15.275488 containerd[1461]: time="2025-02-13T15:11:15.275446588Z" level=info msg="StartContainer for \"bc2918386ba25462c5c5a1354b08a332d48eed8f4ad99575031809aae175c4ae\" returns successfully" Feb 13 15:11:16.198582 kubelet[2488]: I0213 15:11:16.198151 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lqkxv" podStartSLOduration=22.198134824 podStartE2EDuration="22.198134824s" podCreationTimestamp="2025-02-13 15:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:16.197152834 +0000 UTC m=+27.170631213" watchObservedRunningTime="2025-02-13 15:11:16.198134824 +0000 UTC m=+27.171613203" Feb 13 15:11:16.525349 systemd-networkd[1399]: vethc7ac051b: Gained IPv6LL Feb 13 15:11:19.288752 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:56752.service - OpenSSH per-connection server daemon (10.0.0.1:56752). Feb 13 15:11:19.334433 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 56752 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:19.335773 sshd-session[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:19.339773 systemd-logind[1444]: New session 7 of user core. Feb 13 15:11:19.347362 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:11:19.459598 sshd[3465]: Connection closed by 10.0.0.1 port 56752 Feb 13 15:11:19.460267 sshd-session[3463]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:19.463692 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:56752.service: Deactivated successfully. Feb 13 15:11:19.465405 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:11:19.467716 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:11:19.468649 systemd-logind[1444]: Removed session 7. Feb 13 15:11:24.492599 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:55662.service - OpenSSH per-connection server daemon (10.0.0.1:55662). 
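The pod_startup_latency_tracker entries report two figures: podStartE2EDuration appears to run from podCreationTimestamp to the observed running time, while podStartSLOduration subtracts time spent pulling images. For coredns-6f6b679f8f-lqkxv both values are 22.198134824s (15:11:16.198 minus 15:10:54), and they coincide because firstStartedPulling and lastFinishedPulling are zero, i.e. no image pull was recorded. The kube-flannel-ds-67k4b entry earlier in the log separates the two: 8.121136873s end to end, minus the 4.180907323s pull window (15:10:55.206773672 to 15:10:59.387680995), gives roughly 3.94022955s, matching the reported podStartSLOduration of 3.94022951.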
Feb 13 15:11:24.536363 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 55662 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:24.538065 sshd-session[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:24.542655 systemd-logind[1444]: New session 8 of user core. Feb 13 15:11:24.554427 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:11:24.683250 sshd[3503]: Connection closed by 10.0.0.1 port 55662 Feb 13 15:11:24.683632 sshd-session[3501]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:24.692474 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:55662.service: Deactivated successfully. Feb 13 15:11:24.694634 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:11:24.695419 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:11:24.705537 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:55666.service - OpenSSH per-connection server daemon (10.0.0.1:55666). Feb 13 15:11:24.706816 systemd-logind[1444]: Removed session 8. Feb 13 15:11:24.749471 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:24.750226 sshd-session[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:24.755251 systemd-logind[1444]: New session 9 of user core. Feb 13 15:11:24.763380 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:11:24.908398 sshd[3520]: Connection closed by 10.0.0.1 port 55666 Feb 13 15:11:24.909394 sshd-session[3517]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:24.918371 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:55666.service: Deactivated successfully. Feb 13 15:11:24.921749 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:11:24.923979 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:11:24.932523 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:55672.service - OpenSSH per-connection server daemon (10.0.0.1:55672). Feb 13 15:11:24.934866 systemd-logind[1444]: Removed session 9. Feb 13 15:11:24.974017 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 55672 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:24.975364 sshd-session[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:24.980426 systemd-logind[1444]: New session 10 of user core. Feb 13 15:11:24.990452 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:11:25.107119 sshd[3534]: Connection closed by 10.0.0.1 port 55672 Feb 13 15:11:25.108696 sshd-session[3531]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:25.114181 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:55672.service: Deactivated successfully. Feb 13 15:11:25.117738 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:11:25.119394 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:11:25.120778 systemd-logind[1444]: Removed session 10. Feb 13 15:11:30.123677 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:55686.service - OpenSSH per-connection server daemon (10.0.0.1:55686). 
Feb 13 15:11:30.174563 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 55686 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:30.175816 sshd-session[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:30.184219 systemd-logind[1444]: New session 11 of user core. Feb 13 15:11:30.185362 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:11:30.319875 sshd[3576]: Connection closed by 10.0.0.1 port 55686 Feb 13 15:11:30.320335 sshd-session[3574]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:30.338481 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:55686.service: Deactivated successfully. Feb 13 15:11:30.340103 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:11:30.340969 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:11:30.351537 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:55698.service - OpenSSH per-connection server daemon (10.0.0.1:55698). Feb 13 15:11:30.353536 systemd-logind[1444]: Removed session 11. Feb 13 15:11:30.400869 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 55698 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:30.402737 sshd-session[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:30.408249 systemd-logind[1444]: New session 12 of user core. Feb 13 15:11:30.417360 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:11:30.623550 sshd[3592]: Connection closed by 10.0.0.1 port 55698 Feb 13 15:11:30.624295 sshd-session[3589]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:30.642984 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:55698.service: Deactivated successfully. Feb 13 15:11:30.645318 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:11:30.646234 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:11:30.660153 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:55714.service - OpenSSH per-connection server daemon (10.0.0.1:55714). Feb 13 15:11:30.661738 systemd-logind[1444]: Removed session 12. Feb 13 15:11:30.712017 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 55714 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:30.713342 sshd-session[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:30.718349 systemd-logind[1444]: New session 13 of user core. Feb 13 15:11:30.732411 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:11:32.009336 sshd[3605]: Connection closed by 10.0.0.1 port 55714 Feb 13 15:11:32.010347 sshd-session[3602]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:32.019651 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:55714.service: Deactivated successfully. Feb 13 15:11:32.022887 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:11:32.025137 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:11:32.034841 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:55716.service - OpenSSH per-connection server daemon (10.0.0.1:55716). Feb 13 15:11:32.036118 systemd-logind[1444]: Removed session 13. 
Feb 13 15:11:32.081765 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 55716 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:32.083122 sshd-session[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:32.087070 systemd-logind[1444]: New session 14 of user core. Feb 13 15:11:32.094372 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:11:32.331466 sshd[3646]: Connection closed by 10.0.0.1 port 55716 Feb 13 15:11:32.332528 sshd-session[3643]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:32.345362 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:55716.service: Deactivated successfully. Feb 13 15:11:32.347438 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:11:32.348816 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:11:32.356545 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:55728.service - OpenSSH per-connection server daemon (10.0.0.1:55728). Feb 13 15:11:32.357760 systemd-logind[1444]: Removed session 14. Feb 13 15:11:32.398703 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 55728 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:32.400114 sshd-session[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:32.404866 systemd-logind[1444]: New session 15 of user core. Feb 13 15:11:32.417415 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:11:32.533013 sshd[3660]: Connection closed by 10.0.0.1 port 55728 Feb 13 15:11:32.533406 sshd-session[3657]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:32.536920 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:55728.service: Deactivated successfully. Feb 13 15:11:32.538831 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:11:32.539861 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:11:32.540828 systemd-logind[1444]: Removed session 15. Feb 13 15:11:37.548529 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Feb 13 15:11:37.599507 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:37.600808 sshd-session[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:37.605478 systemd-logind[1444]: New session 16 of user core. Feb 13 15:11:37.618393 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:11:37.735422 sshd[3699]: Connection closed by 10.0.0.1 port 33454 Feb 13 15:11:37.736092 sshd-session[3697]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:37.739380 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:33454.service: Deactivated successfully. Feb 13 15:11:37.741046 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:11:37.743986 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:11:37.745709 systemd-logind[1444]: Removed session 16. Feb 13 15:11:42.747444 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628). 
Feb 13 15:11:42.792438 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:42.793817 sshd-session[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:42.798513 systemd-logind[1444]: New session 17 of user core. Feb 13 15:11:42.805402 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:11:42.924167 sshd[3735]: Connection closed by 10.0.0.1 port 37628 Feb 13 15:11:42.924539 sshd-session[3733]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:42.927943 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:37628.service: Deactivated successfully. Feb 13 15:11:42.930016 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:11:42.930889 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:11:42.931908 systemd-logind[1444]: Removed session 17. Feb 13 15:11:47.935588 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:37640.service - OpenSSH per-connection server daemon (10.0.0.1:37640). Feb 13 15:11:47.991134 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 37640 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:11:47.992532 sshd-session[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:47.996809 systemd-logind[1444]: New session 18 of user core. Feb 13 15:11:48.008389 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:11:48.123152 sshd[3771]: Connection closed by 10.0.0.1 port 37640 Feb 13 15:11:48.124929 sshd-session[3769]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:48.128173 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:37640.service: Deactivated successfully. Feb 13 15:11:48.131610 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:11:48.134089 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:11:48.135073 systemd-logind[1444]: Removed session 18.